CN117714867A - Image anti-shake method and electronic equipment - Google Patents

Image anti-shake method and electronic equipment

Info

Publication number
CN117714867A
CN117714867A (application CN202310552171.4A)
Authority
CN
China
Prior art keywords
image
image stream
translation
camera
stream
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310552171.4A
Other languages
Chinese (zh)
Inventor
卢圣卿
王宁
王宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202310552171.4A
Publication of CN117714867A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The present application relates to the field of images and provides an image anti-shake method and an electronic device. The image anti-shake method is applied to an electronic device that includes a first camera, where the first camera is a movable camera, and the method includes: starting a camera application; acquiring a first image stream captured by the first camera; obtaining a global translation amount based on any two adjacent frames of images in the first image stream; obtaining a plurality of translation poses based on the global translation amounts and time difference information; obtaining a plurality of translation compensation amounts based on the plurality of translation poses, where the plurality of translation compensation amounts represent the pose differences between the plurality of translation poses and the smoothed translation poses and correspond one-to-one to the frames of images; obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream; and displaying the second image stream. Based on this technical solution, the image anti-shake effect can be improved when the camera in the electronic device moves flexibly.

Description

Image anti-shake method and electronic equipment
Technical Field
The application relates to the field of image processing, in particular to an image anti-shake method and electronic equipment.
Background
With the development of shooting functions in electronic devices, camera applications are used more and more widely. At present, due to camera hardware limitations, the angle of view of a camera is limited. When a user shoots a distant object, a tele camera is usually needed; however, the angle of view of a tele camera is limited and cannot meet the user's shooting needs in different scenes. To overcome this limitation, the camera needs to move flexibly; at the same time, after the camera moves, the image anti-shake effect is poor if the existing anti-shake methods are used for image anti-shake processing.
Therefore, when the camera in the electronic device moves flexibly, how to perform image anti-shake processing and improve the image anti-shake effect becomes a problem to be solved.
Disclosure of Invention
The present application provides an image anti-shake method and an electronic device, which can improve the image anti-shake effect when the camera in the electronic device moves flexibly.
In a first aspect, an image anti-shake method is provided, applied to an electronic device, where the electronic device includes a first camera, and the first camera is a movable camera, and the method includes:
starting a camera application;
acquiring a first image stream captured by the first camera, where the first image stream includes multiple frames of images;
obtaining a global translation amount based on any two adjacent frames of images in the first image stream, where the global translation amount represents the translation of the image center point between the two adjacent frames;
obtaining a plurality of translation poses based on the global translation amounts and time difference information, where the plurality of translation poses correspond one-to-one to the multiple frames of images, one of the translation poses represents the pose of the electronic device when one of the frames is captured, and the time difference information represents the time differences at which the frames of the first image stream are captured;
obtaining a plurality of translation compensation amounts based on the plurality of translation poses, where the plurality of translation compensation amounts represent the pose differences between the plurality of translation poses and the smoothed translation poses, and the plurality of translation compensation amounts correspond one-to-one to the multiple frames of images;
obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream;
displaying the second image stream.
In the solution of the present application, the electronic device includes a movable camera; the movement of the movable camera overcomes the limited angle of view of a tele camera and thus meets the needs of different shooting scenes. When the camera in the electronic device moves flexibly, image anti-shake processing can still be performed on the image stream according to the image content of the image stream captured by the movable camera. The solution does not need to read the gyroscope sensor in the electronic device or derive the motion of the electronic device from gyroscope data; instead, the translation pose of the electronic device at the time each image was captured, and thus the motion of the electronic device, is obtained from the differences in image content within the image stream. Therefore, even when the camera in the electronic device moves flexibly, image anti-shake processing can be performed based on the image content of the image stream captured by the movable camera, improving the anti-shake effect. In addition, because the solution does not rely on a gyroscope sensor in the electronic device, the range of scenes in which the image anti-shake method can be applied is extended to a certain extent.
It should be noted that gyroscope data can only represent the rotational motion of the electronic device, not its translational motion; therefore, the motion of the electronic device cannot be detected accurately from gyroscope data alone. Both rotation and translation of the electronic device, however, are reflected in differences in image content; therefore, the image anti-shake scheme of the present application can improve the image anti-shake effect to a certain extent.
It should be understood that the translation poses of the images in the first image stream may be derived from the pixel translation amounts of the images; in the solution of the present application, a translation pose may refer to a translation amount in the camera coordinate system or the world coordinate system obtained from a translation amount in the pixel coordinate system.
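Purely for illustration, the relation between a pixel-domain translation and a camera-coordinate translation can be sketched with a pinhole-camera model; the function name, the focal length in pixels, and the assumed scene depth below are hypothetical inputs, not values given by the patent:

```python
import numpy as np

def pixel_shift_to_camera_translation(dx_px, dy_px, focal_px, depth_m):
    """Map a pixel-coordinate translation to a camera-coordinate translation.

    Pinhole-camera sketch (an assumption, not the patent's exact model):
    a lateral camera translation t at scene depth Z produces an image
    shift of roughly f * t / Z pixels, so t is approximately shift * Z / f.
    """
    tx = dx_px * depth_m / focal_px
    ty = dy_px * depth_m / focal_px
    return np.array([tx, ty, 0.0])

# Example: a 12-pixel shift with a 3000-px focal length and a 5 m scene
# corresponds to roughly 2 cm of lateral camera motion.
print(pixel_shift_to_camera_translation(12.0, 0.0, 3000.0, 5.0))
```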
With reference to the first aspect, in certain implementations of the first aspect, obtaining the second image stream based on the plurality of translation compensation amounts and the first image stream includes:
obtaining a first correction vector based on first motor position information, where the first motor position information is the motor position information when the first camera captures the first image stream, and the first correction vector is used to perform image rotation correction on the first image stream;
performing first image processing on the first image stream based on the first correction vector to obtain an image stream after image rotation correction;
performing second image processing on the image stream after image rotation correction based on the plurality of translation compensation amounts to obtain the second image stream.
In this solution, a first correction vector for image rotation correction is obtained from the motor position of the first camera; image rotation correction is performed on the first image stream according to the first correction vector to obtain an image stream after image rotation correction; EIS processing is then performed on the image stream after image rotation correction according to the plurality of translation compensation amounts to obtain the processed image stream.
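A minimal sketch of this two-stage pipeline, assuming OpenCV-style warps; the per-frame rotation-correction homography and translation compensation are taken as given inputs, and their derivation from motor position information is not shown here:

```python
import cv2
import numpy as np

def stabilize_two_stage(frames, rot_homographies, compensations):
    """Stage 1: image rotation correction; stage 2: translation compensation (EIS)."""
    out = []
    for img, H, (dx, dy) in zip(frames, rot_homographies, compensations):
        h, w = img.shape[:2]
        # First image processing: undo the image rotation caused by the prism motion.
        derotated = cv2.warpPerspective(img, H, (w, h))
        # Second image processing: shift the frame by its translation compensation amount.
        M = np.float32([[1, 0, dx], [0, 1, dy]])
        out.append(cv2.warpAffine(derotated, M, (w, h)))
    return out
```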
With reference to the first aspect, in some implementations of the first aspect, performing second image processing on the image stream after image rotation correction based on the plurality of translation compensation amounts to obtain the second image stream includes:
applying an interpolation algorithm to the pixel points of each image in the image stream after image rotation correction based on the plurality of translation compensation amounts, to obtain the second image stream.
In this solution, the pixel points in each image are processed by an interpolation algorithm, which can reduce the computation load of the electronic device to a certain extent.
With reference to the first aspect, in certain implementations of the first aspect, obtaining the second image stream based on the plurality of translation compensation amounts and the first image stream includes:
obtaining a first correction vector based on first motor position information, where the first motor position information is the motor position information when the first camera captures the first image stream, and the first correction vector is used to perform image rotation correction on the first image stream;
obtaining a plurality of second correction vectors based on the first correction vector and the plurality of translation compensation amounts;
performing third image processing on the first image stream based on the plurality of second correction vectors to obtain the second image stream.
In this solution, the image rotation correction and the motion compensation of the first image stream are realized together through the plurality of second correction vectors to obtain the processed image stream; in other words, each pixel point of each frame in the first image stream is warped only once, which can reduce the computation required for EIS processing of the image stream to a certain extent.
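A minimal sketch of the single-warp variant under the same assumptions; composing the translation with the rotation-correction homography before warping is one plausible reading of a "second correction vector", not necessarily the patent's exact formulation:

```python
import cv2
import numpy as np

def stabilize_single_warp(frames, rot_homographies, compensations):
    """Fuse image rotation correction and translation compensation into one warp per frame."""
    out = []
    for img, H, (dx, dy) in zip(frames, rot_homographies, compensations):
        h, w = img.shape[:2]
        T = np.array([[1, 0, dx],
                      [0, 1, dy],
                      [0, 0, 1]], dtype=np.float64)
        # One combined correction per frame: translate after derotating, in a single warp.
        out.append(cv2.warpPerspective(img, T @ H, (w, h)))
    return out
```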
With reference to the first aspect, in certain implementations of the first aspect, performing third image processing on the first image stream based on the plurality of second correction vectors to obtain the second image stream includes:
applying an interpolation algorithm to the pixel points in the multiple image frames of the first image stream based on the plurality of second correction vectors, to obtain the second image stream.
In this solution, the pixel points in each image are processed by an interpolation algorithm, which can reduce the computation load of the electronic device to a certain extent.
With reference to the first aspect, in some implementations of the first aspect, obtaining the global translation amount based on any two adjacent frames of images in the first image stream includes:
performing image feature point detection and feature point matching on any two adjacent frames of images in the first image stream to obtain feature point pairs;
obtaining the global translation amount based on the translations between the feature point pairs.
In one implementation, the translations of the feature point pairs in the two frames of images may be averaged to obtain the global translation amount between the two frames.
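As an illustration, the global translation amount between two adjacent frames could be estimated as follows, assuming OpenCV's ORB detector and brute-force Hamming matching; the patent does not name a particular feature detector, so these are illustrative choices:

```python
import cv2
import numpy as np

def global_translation(prev_gray, curr_gray):
    """Estimate the global translation between two adjacent frames from matched feature points."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if not matches:
        return np.zeros(2)
    # Average the per-pair displacements to obtain one global translation amount.
    shifts = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
    return np.mean(shifts, axis=0)
```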
With reference to the first aspect, in certain implementations of the first aspect, obtaining the plurality of translation compensation amounts based on the plurality of translation poses includes:
smoothing the plurality of translation poses to obtain a plurality of smoothed translation poses;
obtaining the plurality of translation compensation amounts based on the pose differences between the plurality of translation poses and the plurality of smoothed translation poses.
In this solution, by smoothing the plurality of translation poses, the amount of motion between adjacent image frames in the first image stream is kept relatively steady when EIS processing is performed on the images in the first image stream; that is, the stability of the images in the image stream is ensured.
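A minimal sketch of this smoothing step, assuming the translation poses are 2-D vectors and using a simple moving-average filter; the window length and filter type are assumptions, not specified by the patent:

```python
import numpy as np

def translation_compensation(translation_poses, window=15):
    """Smooth the per-frame translation poses and return per-frame compensation amounts."""
    poses = np.asarray(translation_poses, dtype=np.float64)   # shape (N, 2)
    kernel = np.ones(window) / window
    # Moving-average smoothing along the pose path (boundary effects of the
    # "same"-mode convolution are ignored in this sketch).
    smoothed = np.column_stack([
        np.convolve(poses[:, k], kernel, mode="same") for k in range(poses.shape[1])
    ])
    # Compensation = pose difference between the smoothed path and the raw path.
    return smoothed - poses
```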
With reference to the first aspect, in some implementations of the first aspect, after the first camera rotates, a first offset exists between a center point of a lens of the first camera and an imaging center point.
With reference to the first aspect, in certain implementations of the first aspect, the first camera includes a movable tele camera.
In a second aspect, an electronic device is provided, including: one or more processors, a memory, and a first camera, where the first camera is a movable camera; the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform:
starting a camera application;
acquiring a first image stream captured by the first camera, where the first image stream includes multiple frames of images;
obtaining a global translation amount based on any two adjacent frames of images in the first image stream, where the global translation amount represents the translation of the image center point between the two adjacent frames;
obtaining a plurality of translation poses based on the global translation amounts and time difference information, where the plurality of translation poses correspond one-to-one to the multiple frames of images, one of the translation poses represents the pose of the electronic device when one of the frames is captured, and the time difference information represents the time differences at which the frames of the first image stream are captured;
obtaining a plurality of translation compensation amounts based on the plurality of translation poses, where the plurality of translation compensation amounts represent the pose differences between the plurality of translation poses and the smoothed translation poses, and the plurality of translation compensation amounts correspond one-to-one to the multiple frames of images;
obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream;
displaying the second image stream.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
obtaining a first correction vector based on first motor position information, where the first motor position information is the motor position information when the first camera captures the first image stream, and the first correction vector is used to perform image rotation correction on the first image stream;
performing first image processing on the first image stream based on the first correction vector to obtain an image stream after image rotation correction;
performing second image processing on the image stream after image rotation correction based on the plurality of translation compensation amounts to obtain the second image stream.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
applying an interpolation algorithm to the pixel points of each image in the image stream after image rotation correction based on the plurality of translation compensation amounts, to obtain the second image stream.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
obtaining a first correction vector based on first motor position information, where the first motor position information is the motor position information when the first camera captures the first image stream, and the first correction vector is used to perform image rotation correction on the first image stream;
obtaining a plurality of second correction vectors based on the first correction vector and the plurality of translation compensation amounts;
performing third image processing on the first image stream based on the plurality of second correction vectors to obtain the second image stream.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
applying an interpolation algorithm to the pixel points in the multiple image frames of the first image stream based on the plurality of second correction vectors, to obtain the second image stream.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
performing image feature point detection and feature point matching on any two adjacent frames of images in the first image stream to obtain feature point pairs;
obtaining the global translation amount based on the translations between the feature point pairs.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
smoothing the plurality of translation poses to obtain a plurality of smoothed translation poses;
obtaining the plurality of translation compensation amounts based on the pose differences between the plurality of translation poses and the plurality of smoothed translation poses.
With reference to the second aspect, in some implementations of the second aspect, after the first camera rotates, a first offset exists between a center point of a lens of the first camera and an imaging center point.
With reference to the second aspect, in certain implementations of the second aspect, the first camera includes a movable tele camera.
It should be appreciated that the extensions, definitions, explanations and illustrations of the relevant content in the first aspect described above also apply to the same content in the second aspect.
In a third aspect, there is provided an electronic device, comprising: one or more processors and memory; the memory is coupled to one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions, the one or more processors invoking the computer instructions to cause the electronic device to perform any of the image anti-shake methods of the first aspect.
In a fourth aspect, a chip system is provided, the chip system being applied to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform any of the image anti-shake methods of the first aspect.
In a fifth aspect, there is provided a computer-readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform any one of the image anti-shake methods of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform any of the image anti-shake methods of the first aspect.
In the solution of the present application, the electronic device includes a movable camera; the movement of the movable camera overcomes the limited angle of view of a tele camera and thus meets the needs of different shooting scenes. When the camera in the electronic device moves flexibly, image anti-shake processing can still be performed on the image stream according to the image content of the image stream captured by the movable camera. The solution does not need to read the gyroscope sensor in the electronic device or derive the motion of the electronic device from gyroscope data; instead, the translation pose of the electronic device at the time each image was captured, and thus the motion of the electronic device, is obtained from the differences in image content within the image stream. Therefore, even when the camera in the electronic device moves flexibly, image anti-shake processing can be performed based on the image content of the image stream captured by the movable camera, improving the anti-shake effect. In addition, because the solution does not rely on a gyroscope sensor in the electronic device, the range of scenes in which the image anti-shake method can be applied is extended to a certain extent.
Drawings
Fig. 1 is a schematic diagram of a hardware system suitable for the electronic device of the present application;
Fig. 2 is a schematic diagram of an application scenario according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a camera arrangement on an electronic device;
Fig. 4 is a schematic side view of a movable tele camera according to an embodiment of the present application;
Fig. 5 is a schematic side view of a movable tele camera according to an embodiment of the present application;
Fig. 6 is a schematic side view of another movable tele camera according to an embodiment of the present application;
Fig. 7 is an optical path diagram of a movable tele camera according to an embodiment of the present application;
Fig. 8 is a schematic side view of a movable tele camera according to an embodiment of the present application;
Fig. 9 is a schematic side view of a movable tele camera according to an embodiment of the present application;
Fig. 10 is an optical path diagram of another movable tele camera according to an embodiment of the present application;
Fig. 11 is a schematic diagram of the angle of view of a movable tele camera according to an embodiment of the present application;
Fig. 12 is a schematic diagram of two frames of images according to an embodiment of the present application;
Fig. 13 is a schematic diagram of the change of the angle of view before and after movement of a movable tele camera according to an embodiment of the present application;
Fig. 14 is a schematic flowchart of an image anti-shake method according to an embodiment of the present application;
Fig. 15 is a schematic flowchart of another image anti-shake method according to an embodiment of the present application;
Fig. 16 is a schematic diagram of image feature point detection according to an embodiment of the present application;
Fig. 17 is a schematic diagram of path smoothing of translation compensation amounts according to an embodiment of the present application;
Fig. 18 is a schematic diagram of electronic image stabilization processing of a pixel point according to an embodiment of the present application;
Fig. 19 is a schematic flowchart of another image anti-shake method according to an embodiment of the present application;
Fig. 20 is a schematic diagram of the change of the angle of view after movement of another movable tele camera according to an embodiment of the present application;
Fig. 21 is a schematic diagram of a coordinate system mapping relationship according to an embodiment of the present application;
Fig. 22 is a schematic diagram of a feature point map according to an embodiment of the present application;
Fig. 23 is a schematic flowchart of another image anti-shake method according to an embodiment of the present application;
Fig. 24 is a schematic diagram of an electronic device suitable for the present application;
Fig. 25 is a schematic diagram of another electronic device suitable for the present application.
Detailed Description
In the embodiments of the present application, the following terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
First, the terms of art to which the present application relates will be briefly described.
1. Electronic anti-shake (electronic image stabilization, EIS)
Electronic anti-shake may also be called electronic image stabilization. EIS technology refers to anti-shake processing based on motion sensor data: the motion between the image frames of an image sequence is calculated from the data collected by a motion sensor during the exposure of each frame, and that motion is then corrected to produce a relatively stable image sequence.
2. Rolling shutter
Rolling shutter means that, at the start of exposure, the sensor exposes the image line by line, scanning row by row until all pixel points have been exposed.
It should be understood that, for the same image frame, if the rolling shutter exposure mode is used, each row of the image is exposed at a different time, so the image needs to be compensated row by row; this compensation may be referred to as rolling shutter correction.
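A deliberately simplified sketch of row-by-row compensation, assuming the only motion during the frame is a horizontal translation that accumulates linearly from the first row to the last; a real rolling shutter correction would derive the per-row motion from the sensor readout timestamps:

```python
import numpy as np

def rolling_shutter_correct(img, total_shift_px):
    """Shift each row by an interpolated fraction of the frame's total horizontal motion."""
    h = img.shape[0]
    out = np.empty_like(img)
    for row in range(h):
        # Rows exposed later have accumulated more motion, so they need a larger correction.
        shift = int(round(total_shift_px * row / max(h - 1, 1)))
        # Wrap-around at the image border is ignored in this toy example.
        out[row] = np.roll(img[row], -shift, axis=0)
    return out
```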
Fig. 1 shows a hardware system suitable for use in the electronic device of the present application.
The electronic device 100 may be a cell phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, etc., and the specific type of the electronic device 100 is not limited in the embodiments of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 1 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than those shown in FIG. 1, or electronic device 100 may include a combination of some of the components shown in FIG. 1, or electronic device 100 may include sub-components of some of the components shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it may be called directly from memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
For example, in an embodiment of the present application, the processor 110 may perform: starting a camera application; acquiring a first image stream captured by the first camera, where the first image stream includes multiple frames of images; obtaining a global translation amount based on any two adjacent frames of images in the first image stream, where the global translation amount represents the translation of the image center point between the two adjacent frames; obtaining a plurality of translation poses based on the global translation amounts and time difference information, where the plurality of translation poses correspond one-to-one to the multiple frames of images, one of the translation poses represents the pose of the electronic device when one of the frames is captured, and the time difference information represents the time differences at which the frames of the first image stream are captured; obtaining a plurality of translation compensation amounts based on the plurality of translation poses, where the plurality of translation compensation amounts represent the pose differences between the plurality of translation poses and the smoothed translation poses and correspond one-to-one to the multiple frames of images; obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream; and displaying the second image stream.
The connection relationships between the modules shown in fig. 1 are merely illustrative, and do not constitute a limitation on the connection relationships between the modules of the electronic device 100. Alternatively, the modules of the electronic device 100 may also use a combination of the various connection manners in the foregoing embodiments.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
For example, the motor 191 may generate vibration. The motor 191 may be used for incoming call alerting as well as for touch feedback. The motor 191 may generate different vibration feedback effects for touch operations acting on different applications. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (e.g., time alert, receipt message, alarm clock, and game) may correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Illustratively, the display screen 194 may be used to display images or video.
Illustratively, the electronic device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
Illustratively, the ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard Red Green Blue (RGB), YUV, etc. format image signal. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Illustratively, the gyroscopic sensor 180B may be used to determine a motion pose of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x-axis, y-axis, and z-axis) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B can also be used for scenes such as navigation and motion sensing games.
Illustratively, in embodiments of the present application, the gyro sensor 180B may be used to collect shake information, which may be used to represent pose changes of the electronic device during shooting.
For example, the acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically, x-axis, y-axis, and z-axis). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the gesture of the electronic device 100 as an input parameter for applications such as landscape switching and pedometer.
Illustratively, a distance sensor 180F is used to measure distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 may range using the distance sensor 180F to achieve fast focus.
Illustratively, ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
Fig. 2 is a schematic diagram of the image streams obtained for the same shooting scene with image anti-shake turned off and turned on, according to an embodiment of the present application.
As shown in fig. 2, the image stream shown in (a) of fig. 2 represents the image stream obtained with image anti-shake turned off in the electronic device, and the image stream shown in (b) of fig. 2 represents the image stream obtained with image anti-shake turned on. Comparing (a) and (b) of fig. 2 shows that, after image anti-shake is turned on, the stability of the image stream is improved and large shake between image frames in the image stream is avoided, so that a relatively stable image stream is produced.
Fig. 3 shows a schematic view of a camera 193 arrangement on an electronic device.
Illustratively, the electronic device provided herein may include one or more cameras 193, where the one or more cameras 193 may be located on the front of the electronic device 100 or on the back of the electronic device 100. The camera 193 located on the front side of the electronic device 100 may be referred to as a front camera, and the camera located on the back side of the electronic device 100 may be referred to as a rear camera. In this application, the camera may also be referred to as a camera module.
For example, in an embodiment of the present application, the electronic device 100 may include 5 cameras 193, the 5 cameras 193 including 2 front cameras and 3 rear cameras. Referring to fig. 3 (a) and 3 (b), the 3 rear cameras are arranged in a row from top to bottom on the rear cover of the electronic device 100, and the 3 rear cameras are a main camera 1931, a telephoto camera 1932, and a wide-angle camera 1933 in order of arrangement.
The focal length of the wide-angle camera 1933 is shorter than that of the main camera 1931, while the focal length of the telephoto camera 1932 is longer than that of the main camera 1931; the longer the focal length, the smaller the angle of view. Therefore, the angle of view of the wide-angle camera 1933 is larger than that of the main camera 1931, and the angle of view of the telephoto camera 1932 is smaller than that of the main camera 1931.
It should be appreciated that the foregoing is merely an example, and that the electronic device 100 may also include other types of cameras, such as ultra-wide angle cameras, black and white cameras, multispectral cameras, etc., which are not limited in this application.
Illustratively, fig. 4 shows a schematic side view of a movable tele camera 1932 provided in the present application.
As shown in fig. 4, the movable tele camera includes an optical lens, an OIS controller, a motor assembly including a prism, and a photosensitive element; the plane of the photosensitive element is perpendicular to the plane of the lens included in the optical lens. In addition, the prism in the motor assembly is arranged in an inclined state so that light emitted from the optical lens can be reflected onto the photosensitive element; on this basis, the motor in the motor assembly can also deflect the optical path from the optical lens by controlling the rotation of the prism, thereby enlarging the shooting angle range. The motor assembly may further include a motor for driving the prism; the motor in the motor assembly is not the same motor as the OIS motor in the OIS controller.
In some embodiments, referring to (a) and (b) of fig. 5, taking the coordinate system shown in (a) of fig. 5 as an example, the motor in the motor assembly may control the cube structure in which the prism is located to rotate up and down around the x-axis, i.e., to nod, so as to expand the field of view in the z-axis direction. For example, as shown in (a) of fig. 6, the dashed-line position indicates the initial position of the prism; the motor may rotate the prism around the x-axis toward the positive z-axis direction to expand the field of view in the positive z-axis direction. Alternatively, as shown in (b) of fig. 6, the motor may rotate the prism around the x-axis toward the negative z-axis direction to expand the field of view in the negative z-axis direction.
Fig. 7 is a schematic diagram of the optical path corresponding to the prism shown in (a) of fig. 6 when the prism moves in the nodding direction. As shown in fig. 7, when the prism does not move, two incident rays are reflected by the prism and their imaging points on the photosensitive element are P and Q, respectively. When the prism moves in the nodding direction shown in fig. 7 to the position of prism', the same two incident rays are reflected by prism' and their imaging points on the photosensitive element are P' and Q', respectively, where P' corresponds to P and Q' corresponds to Q before and after the movement.
Because the nodding motion lies in the plane of the optical path, the two imaging points are displaced by almost equal amounts when the prism nods. It follows that the length of segment PQ before the nodding motion is substantially equal to the length of segment P'Q' after the nodding motion. Therefore, nodding does not cause the image to shake in the y-axis direction, that is, no image rotation problem occurs, and no other distortion is introduced.
In other embodiments, referring to (a) and (b) of fig. 8, taking the coordinate system shown in (a) of fig. 8 as an example, the motor in the motor assembly may control the cube structure in which the prism is located to rotate left and right around the y-axis, i.e., to pan (shake its head), so as to expand the field of view in the x-axis direction. For example, as shown in (a) of fig. 9, the dashed-line position indicates the initial position of the prism; the motor may rotate the prism around the y-axis toward the positive x-axis direction to expand the field of view in the positive x-axis direction. Alternatively, as shown in (b) of fig. 9, the motor may rotate the prism around the y-axis toward the negative x-axis direction to expand the field of view in the negative x-axis direction.
Referring to fig. 10, fig. 10 is a schematic diagram of the optical path corresponding to the prism shown in (a) of fig. 9 when the prism moves in the panning direction. As shown in fig. 10, the incident ray and the reflected ray are symmetric about the optical axis, and the optical axis rotates as the prism rotates. When the prism pans, for example by an angle θ, the horizontal coordinate of the incidence point on the prism surface changes slightly, the optical path changes, and the reflection angle changes. The change in reflection angle causes the image to rotate by approximately the angle θ in the x-axis direction and by an angle δ (crosstalk) in the y-axis direction.
As shown in fig. 10, when the prism does not move, an incident ray is reflected by the prism and its imaging point on the photosensitive element is P2. When the prism moves in the panning direction shown in fig. 10 to the position of prism', the same incident ray is reflected by prism' and its imaging point on the photosensitive element is P2'. Before and after the prism pans, P2 and P2' are coplanar, but the segment P2P2' is parallel neither to the x-axis nor to the y-axis; in other words, the imaging point moves obliquely in the xy plane before and after the movement.
Further, before the prism rotates, suppose one of two incident rays hits the prism at point O1 (assumed to be the center of the prism) and images at point P1 on the photosensitive element, while the other ray hits the prism at point O2 (assumed to be off-center) and images at point P2. After the prism rotates by the angle θ, the imaging point of O1 on the photosensitive element is P1', and the imaging point of O2 is P2'. Because of the change in the optical path, the image rotates less at O1 and more at O2, so the translation from P1 to P1' differs from the translation from P2 to P2'.
Illustratively, fig. 11 shows the change of the angle of view when shooting with the movable tele camera shown in fig. 4. The initial angle-of-view range of the tele camera is FOV0, which is rectangular. When the motor assembly is started and the prism is controlled to nod with the maximum amplitude, the angle-of-view range of the tele camera can move up to FOV1 or down to FOV2; when the prism is controlled to pan with the maximum amplitude, the angle-of-view range can move left to FOV3 or right to FOV4. When the prism performs both nodding and panning (in either order), the angle-of-view range can move at most to FOV5 at the upper right, FOV6 at the lower right, FOV7 at the lower left, FOV8 at the upper left, and so on.
If the prism is rotated by a smaller angle when the motor assembly controls it to nod and pan, the FOV achievable by the tele camera lies within the field formed by FOV3 to FOV8. In addition, superimposing all the FOVs gives the maximum FOV achievable by the tele camera. This maximum FOV is far larger than the initial angle-of-view range FOV0; that is, the tele camera can expand its angle-of-view range by having the rotation motor control the rotation of the prism.
It should also be noted that when the rotation motor controls the prism to nod, no image rotation problem occurs in the y-axis direction and no other distortion appears; thus FOV1 and FOV2 remain substantially unchanged in shape relative to FOV0, that is, they remain axis-aligned rectangles. When the rotation motor controls the prism to pan, an image rotation problem occurs, so FOV3 to FOV8 change relative to FOV0 and can be regarded as translated and rotated. For this reason, during imaging, the images corresponding to FOV3 to FOV8 captured by the telephoto camera all suffer from the image rotation problem.
Illustratively, fig. 12 shows two frames of images captured with the tele camera. The image in (a) of fig. 12 is a playing-card image shot before the rotation motor in the tele camera moves, and the characters and graphics in the image are displayed normally; the image in (b) of fig. 12 is a playing-card image shot after the rotation motor in the tele camera performs a panning motion, and the characters and graphics in the image exhibit the image rotation problem.
In addition, although the angle-of-view range can be expanded by the movable tele camera, the angle of view of the electronic device changes when the movable camera moves. For example, when the movable tele camera is at the initial position shown in (a) of fig. 13, the angle of view of the electronic device is FOV1, as shown in (b) of fig. 13; if the movable tele camera moves to the position shown in (c) of fig. 13, the angle of view of the electronic device becomes FOV2, as shown in (d) of fig. 13. It can be seen that if the movable tele camera in the electronic device moves, the angle of view of the camera in the electronic device changes and an image rotation problem may exist; if the electronic device is shooting a video at this time, there are image jumps between the image frames in the video and a relatively stable video cannot be produced.
At present, a tele camera is usually needed when a user shoots a distant object; however, the angle of view of a tele camera is limited and cannot meet the user's shooting needs in different scenes. To overcome this limitation, the camera needs to move flexibly; at the same time, after the camera moves, the image anti-shake effect is poor if the existing anti-shake methods are used for image anti-shake processing.
In view of this, an embodiment of the present application provides an image anti-shake method and an electronic device. The electronic device includes a first camera, and the first camera is a movable camera. The method includes: starting a camera application; acquiring a first image stream captured by the first camera, where the first image stream includes multiple frames of images; obtaining a global translation amount based on any two adjacent frames of images in the first image stream, where the global translation amount represents the translation of the image center point between the two adjacent frames; obtaining a plurality of translation poses based on the global translation amounts and time difference information, where the plurality of translation poses correspond one-to-one to the multiple frames of images, one of the translation poses represents the pose of the electronic device when one of the frames is captured, and the time difference information represents the time differences at which the frames of the first image stream are captured; obtaining a plurality of translation compensation amounts based on the plurality of translation poses, where the plurality of translation compensation amounts represent the pose differences between the plurality of translation poses and the smoothed translation poses and correspond one-to-one to the multiple frames of images; obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream; and displaying the second image stream. According to this solution, the translation pose of the electronic device when the image stream is captured, and thus the motion of the electronic device, can be obtained from the differences in image content within the image stream. Therefore, even when the camera in the electronic device moves flexibly, image anti-shake processing can be performed based on the image content of the image stream captured by the movable camera, improving the anti-shake effect. In addition, the solution does not need to read the gyroscope sensor in the electronic device; because it does not rely on a gyroscope sensor, the range of scenes in which the image anti-shake method can be applied is extended to a certain extent.
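Purely as an illustrative sketch of how these steps compose (not the patent's stated implementation): the helper below accumulates per-pair global translations into per-frame translation poses, smooths them, derives the compensation amounts, and hands each frame with its compensation to a warping callback. `estimate_global_translation` and `warp_by` are assumed callbacks, the moving-average window is arbitrary, and the role of the time-difference information is simplified away here.

```python
import numpy as np

def anti_shake_pipeline(frames, estimate_global_translation, warp_by, window=15):
    """Sketch: global translations -> translation poses -> smoothed poses -> compensations -> warp."""
    # Per-pair global translation amounts (pixels); the first frame has no predecessor.
    shifts = [np.zeros(2)] + [np.asarray(estimate_global_translation(a, b))
                              for a, b in zip(frames, frames[1:])]
    poses = np.cumsum(shifts, axis=0)            # one translation pose per frame
    kernel = np.ones(window) / window
    smoothed = np.column_stack([np.convolve(poses[:, k], kernel, mode="same") for k in range(2)])
    compensations = smoothed - poses             # pose difference = per-frame compensation amount
    return [warp_by(f, c) for f, c in zip(frames, compensations)]
```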
The following describes in detail schematic flowcharts of the image anti-shake method provided in the embodiment of the present application with reference to fig. 14 to 23.
Fig. 14 is a schematic flowchart of an image anti-shake method provided in an embodiment of the present application. The method 200 may be performed by the electronic device shown in fig. 1; the method 200 includes S210 to S260, and S210 to S260 are described in detail below, respectively.
S210, starting the movable camera.
Illustratively, the movable camera may comprise a movable tele camera; the structure and working principle of the movable tele camera are shown in fig. 4 to 11, and are not repeated here.
S220, acquiring a Raw image stream.
Illustratively, the Raw image stream may refer to the original image stream captured by the movable camera; that is, an image stream in the Raw domain, or equivalently an image stream in the Raw color space.
S230, performing image processing on the Raw image stream to obtain a YUV image stream.
For example, the image processing may include a correlation algorithm that converts a Raw image acquired by the movable camera into a YUV image; the present application does not set any limitation on the algorithm.
For example, image processing may refer to an image processing algorithm performed in ISP that converts a Raw image to a YUV image.
In the solution of the present application, the movement of the movable camera may cause an image rotation problem in the captured image stream. In one implementation, image rotation correction may be performed on the YUV image stream first, and EIS processing may then be performed on the image stream after image rotation correction, as shown in fig. 15. In another implementation, image rotation correction and EIS processing may be performed on the YUV image stream at the same time; for example, an image correction matrix may be obtained from the image rotation correction matrix and the content difference between two adjacent frames of images in the YUV image stream, and the images in the YUV image stream may be subjected to image rotation correction and EIS processing through that image correction matrix, as shown in fig. 19.
S240, obtaining a plurality of image correction matrixes based on the image content of each two adjacent YUV images in the YUV image stream.
In the embodiment of the application, the image correction matrix can be obtained through the image content difference of two adjacent frames of images in the image stream; the image correction matrix is used for carrying out anti-shake processing on the image frames in the image stream; according to the method, data of a gyroscope sensor in the electronic equipment can be not required to be acquired, and image anti-shake processing can be achieved according to differences of image contents of two adjacent frames of images in the acquired image stream.
For example, assume that the YUV image stream includes N frames of images; the image correction matrix 1 can be obtained according to the image content of the 1 st frame image and the image content of the 2 nd frame image; the image correction matrix 2 can be obtained according to the image content of the 2 nd frame image and the image content of the 3 rd frame image; similarly, the image correction matrix N-2 is obtained according to the image content of the N-2 th frame image and the N-1 th frame image; and the image correction matrix N-1 is obtained according to the image content of the N-1 th frame image and the image content of the N th frame image; that is, N-1 image correction matrices are obtained for the N frames of images.
S250, carrying out EIS processing on the YUV image stream based on a plurality of image correction matrixes to obtain the processed image stream.
Illustratively, suppose that the YUV image stream includes N frames of images; N-1 image correction matrices can be obtained according to S240; the image correction matrix 1 can be applied to the 2 nd frame image to obtain the processed 2 nd frame image; the image correction matrix 2 is applied to the 3 rd frame image to obtain the processed 3 rd frame image; similarly, the image correction matrix N-1 is applied to the N th frame image to obtain the processed N th frame image; and the processed image stream is obtained from the 1 st frame image, the processed 2 nd frame image and the processed 3 rd frame image to the processed N th frame image.
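Illustratively, the pairing and application logic of S240 and S250 can be sketched in Python as follows; this is only a minimal illustration under stated assumptions, namely that each image correction matrix is a 2x3 affine matrix and that an estimator callable (the parameter estimate_matrix, which is not defined in this application) derives it from the content of two adjacent frames.

import cv2

def stabilize_stream(yuv_frames, estimate_matrix):
    # Minimal sketch of S240/S250 (an assumption, not the claimed implementation):
    # matrix k is estimated from frame k and frame k+1 and applied to frame k+1;
    # the 1st frame passes through unchanged, so N frames yield N-1 matrices.
    h, w = yuv_frames[0].shape[:2]
    processed = [yuv_frames[0]]
    for k in range(len(yuv_frames) - 1):
        m = estimate_matrix(yuv_frames[k], yuv_frames[k + 1])   # 2x3 affine, from image content
        processed.append(cv2.warpAffine(yuv_frames[k + 1], m, (w, h)))
    return processed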
And S260, displaying the processed image stream.
For example, the processed image stream may be displayed in an electronic device.
Alternatively, in one implementation, the processed image stream may be saved and the electronic device may display the processed image stream upon detection of an operation to view the image stream.
In the embodiment of the application, the electronic equipment comprises the movable camera, and the problem that the angle of view of the long-focus camera is limited can be solved by the movement of the movable camera, so that the requirements of different shooting scenes are met; under the condition that the camera in the electronic equipment flexibly moves, the image anti-shake processing of the image stream can be realized according to the image content of the image stream acquired by the movable camera through the scheme of the application; in the scheme of the application, the data in the gyroscope sensor in the electronic equipment is not required to be acquired, and the motion quantity of the electronic equipment is acquired based on the data of the gyroscope sensor; according to the scheme, the translation pose of the electronic equipment when the image stream is acquired can be obtained according to the difference of the image content in the image stream, so that the motion quantity of the electronic equipment is obtained; therefore, in the scheme of the application, under the condition that the camera in the electronic equipment realizes flexible movement, the image anti-shake processing can be performed through the image content of the image stream acquired by the movable camera, so that the anti-shake effect of the image is improved; in addition, the scheme does not need to acquire data in a gyroscope sensor in the electronic equipment; therefore, the scheme of the application does not depend on a gyroscope sensor in the electronic equipment, and the application scene of the image anti-shake method can be improved to a certain extent.
Implementation one
In one implementation, the image rotation correction may be performed on the YUV image stream to obtain an image-rotation-corrected image stream; then, EIS processing is performed according to the image content difference of adjacent image frames in the image-rotation-corrected image stream, so as to obtain the processed image stream; as shown in fig. 15.
Fig. 15 is a schematic flowchart of an image anti-shake method provided in an embodiment of the present application. The method 300 may be performed by the electronic device shown in fig. 1; the method 300 includes S301 to S309, and S301 to S309 are described in detail below, respectively.
S301, a movable camera collects Raw image streams.
The movable camera comprises a movable tele camera; the Raw image stream may refer to the Raw image data acquired by the movable camera.
It should be understood that the Raw image stream is the Raw data collected by the movable camera; the Raw data includes more image information.
It should also be appreciated that, before and after the movement of the movable camera, there is some offset between the center of the lens of the movable camera and the imaging center.
Illustratively, the movable camera may be a movable tele camera, as shown in fig. 4; the working principle of the movable tele camera is shown in fig. 5 to 11; and will not be described in detail herein.
S302, performing front-end processing on the Raw image stream to obtain a YUV image stream.
Alternatively, the front-end processing may include a correlation algorithm that converts the acquired Raw image into a YUV image; the present application does not set any limitation on the algorithm.
By way of example, the front-end processing may refer to an image processing algorithm that converts a Raw image to a YUV image as performed in ISP.
S303, performing image rotation correction processing on the YUV image stream based on the image rotation correction matrix to obtain an image rotation correction image stream.
Illustratively, the image frames in the image stream before the image rotation correction processing are as shown in (b) in fig. 12; the image frames in the image stream after the image rotation correction processing are as shown in fig. 12 (a).
Alternatively, the image rotation correction matrix may be obtained according to the rotation amount of the movable camera and a position vector constructed in advance.
It can be understood that the movement of the prism is driven by a rotary motor, and the position of the rotary motor has a corresponding relationship with the angle through which the prism moves. The rotary motor is driven by a motor; when the motor drives the rotary motor to move the prism, the motor outputs digital signals, and each digital signal identifies one position of the rotary motor. For convenience of description, the digital signal output by the motor is referred to as a scan code, and the current position of the rotary motor is identified by the latest scan code; because the position of the rotary motor corresponds to the angle through which the prism moves, the scan code also corresponds to the angle through which the prism moves.
Accordingly, after determining the rotation vector and the translation vector of each shooting point relative to the initial position, the scan code corresponding to each shooting point, and the determined rotation vector and translation vector of the shooting point relative to the initial position may be stored, so as to obtain a position vector table in each embodiment of the present application, where a form of the position vector table may be shown in table 1, for example.
TABLE 1
Shooting point    Scan code    Rotation vector    Translation vector
Point1            Code1        R1                 T1
Point2            Code2        R2                 T2
Point3            Code3        R3                 T3
Point4            Code4        R4                 T4
Point5            Code5        R5                 T5
...               ...          ...                ...
Point24           Code24       R24                T24
It should be understood that table 1 is only an example listed for better understanding of the technical solution of the present embodiment, and is not the only limitation of the present embodiment.
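Illustratively, the position vector table can be regarded as a simple lookup keyed by the scan code; the following sketch only shows the lookup, and the entries (Code1, Code2, and the numeric rotation/translation values) are hypothetical placeholders standing in for factory calibration data.

from typing import NamedTuple, Tuple

class PoseEntry(NamedTuple):
    rotation: Tuple[float, float, float]     # rotation vector relative to the initial position
    translation: Tuple[float, float, float]  # translation vector relative to the initial position

# Hypothetical calibration values in the spirit of table 1 (Code1..Code24 -> (R, T)).
POSITION_VECTOR_TABLE = {
    "Code1": PoseEntry((0.0, 0.0, 0.00), (0.0, 0.0, 0.0)),
    "Code2": PoseEntry((0.0, 0.0, 0.02), (0.1, 0.0, 0.0)),
    # ... remaining shooting points filled in from factory calibration
}

def lookup_position_vectors(scan_code: str) -> PoseEntry:
    # The scan code output by the motor identifies the current rotary-motor position,
    # which in turn identifies the pre-stored rotation/translation vectors.
    return POSITION_VECTOR_TABLE[scan_code]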
It should be noted that, after the movable camera rotates, the angle of view of the image collected by the movable camera changes, and the correction matrix used for processing the image collected by the movable camera changes due to the rotation of the movable camera.
Alternatively, the above is only an example of the image rotation correction processing; any existing image rotation correction processing method may be used, and this application is not limited in any way.
And S304, detecting and matching the characteristic points of two adjacent frames of images in the image rotation correction image stream to obtain characteristic point pairs.
For example, image feature point detection may be performed on the first frame image and the second frame image of the two adjacent frames of images respectively; the feature points in the two frames of images are then paired to obtain feature point pairs; the feature points may refer to key points in the image, and the feature point pairs are shown in (a) and (b) of fig. 16.
Alternatively, the image feature point detection may use any existing image feature point detection algorithm, which is not limited in this application.
S305, obtaining the global translation amount of one frame of image based on the translation amount between the characteristic point pairs.
Illustratively, a first frame image of two adjacent frame images in the image stream includes feature points P11 to P1N, a second frame image includes feature points P21 to P2N, and the feature points P11 to P1N correspond one to one to the feature points P21 to P2N; for example, from the translation amount between the feature point P11 and the feature point P21, the translation amount 1 can be obtained; from the translation amount between the feature point P12 and the feature point P22, the translation amount 2 can be obtained; similarly, from the translation amount between the feature point P1N and the feature point P2N, the translation amount N can be obtained; averaging the N translation amounts gives the global translation amount; the global translation amount may refer to the global translation vector of the second frame image relative to the first frame image, with the first frame image as the reference.
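Illustratively, S304 and S305 can be sketched as follows; the choice of ORB features and a brute-force matcher is only an assumption for illustration, since this application does not limit the feature point detection algorithm, and the function name global_translation is a placeholder.

import cv2
import numpy as np

def global_translation(prev_gray, cur_gray, max_features=500):
    # S304: detect and match feature points in two adjacent (grayscale) frames.
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    # S305: average the per-pair displacements (current frame minus previous frame)
    # to obtain the global translation vector of the current frame.
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches])
    return shifts.mean(axis=0)   # (dx, dy)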
Optionally, the image stream includes N frames of images, and S304 and S305 may be performed for any two adjacent frames of images in the N frames of images, so as to obtain N-1 global translation amounts; for example, assume that 5 frames of images are included in an image stream; according to the 1 st frame image and the 2 nd frame image, a global translation vector 1 can be obtained; the global translation vector 1 is used for representing the global translation amount of the 2 nd frame image relative to the 1 st frame image; according to the 2 nd frame image and the 3 rd frame image, a global translation vector 2 can be obtained; the global translation vector 2 is used for representing the global translation amount of the 3 rd frame image relative to the 2 nd frame image; according to the 3 rd frame image and the 4 th frame image, a global translation vector 3 can be obtained; the global translation vector 3 is used for representing the global translation amount of the 4 th frame image relative to the 3 rd frame image; according to the 4 th frame image and the 5 th frame image, a global translation vector 4 can be obtained; the global translation vector 4 is used for representing the global translation amount of the 5 th frame image relative to the 4 th frame image.
It should be understood that the above description is exemplified by including 5 frames of images in the image stream, and the number of images in the image stream is not limited in any way in the present application.
S306, obtaining a plurality of panning gestures based on the initial gestures, the global translation amount and the time difference information.
The initial pose may refer to the pose of the electronic device when acquiring the first frame image in the YUV image stream; the global translation amount can be used for representing the translation amount of the center points of two adjacent frames of images; according to the pose for acquiring the first frame image, the global translation amounts and the time difference information of the multi-frame images, the translation pose corresponding to each frame of image in the YUV image stream acquired by the electronic device can be obtained; these translation poses can be regarded as the translation poses before optimization.
Optionally, smoothing may be performed on the plurality of translation poses to obtain virtual translation poses; the virtual translation poses refer to the smoothed translation poses; the smoothed translation poses may be regarded as the optimized translation poses.
For example, the smoothed virtual translation poses corresponding to the plurality of translation poses may be determined by an estimation algorithm and a path planning algorithm for the translation poses.
It should be understood that, in the embodiment of the present application, smoothing the plurality of translation poses can ensure that, when the EIS processing is performed on the images in the image stream, the amount of motion between two adjacent frames of images in the image stream remains relatively stable, that is, the stability of the images in the image stream is ensured.
For example, the image stream 1 in fig. 17 represents the image stream obtained from the global translation amounts before the smoothing processing; the image stream 2 represents the image stream obtained from the smoothed (virtual) global translation amounts of the multi-frame images; that is, the image stream 1 may be regarded as the image stream before the global translation amounts are smoothed, and the image stream 2 may be regarded as the image stream after the global translation amounts are smoothed.
S307, obtaining the translation compensation amounts of the multi-frame images in the image stream based on the difference values between the plurality of translation poses and the processed virtual translation poses.
Illustratively, as shown in fig. 17, the image stream 1 and the image stream 2 each include i frames of images; the image stream 1 represents the image stream acquired with the translation poses of the electronic device before optimization; the image stream 2 represents the image stream acquired with the optimized virtual translation poses. For example, according to the global translation amount between the 1 st frame image and the 2 nd frame image and the time difference between the two frames (for example, 33.33 ms), the translation pose of the 2 nd frame image is obtained; similarly, the translation pose of each frame of image acquired by the electronic device can be obtained, that is, the translation poses before optimization. The translation poses before optimization can be smoothed through a path constraint algorithm to obtain the optimized virtual translation poses, where the optimized virtual translation poses correspond one to one to the translation poses before optimization. The translation compensation amount of one frame of image can be obtained from the translation pose before optimization and the corresponding virtual translation pose after optimization; for example, the translation compensation amount 1 (ΔL1) can be obtained from the difference between the feature points in the 1 st frame image of image stream 1 and the feature points in the 1 st frame image of image stream 2; the translation compensation amount 2 (ΔL2) can be obtained from the difference between the feature points in the 2 nd frame image of image stream 1 and the feature points in the 2 nd frame image of image stream 2; the translation compensation amount 3 (ΔL3) can be obtained from the difference between the feature points in the 3 rd frame image of image stream 1 and the feature points in the 3 rd frame image of image stream 2; similarly, the translation compensation amount i is obtained from the difference between the feature points in the i-th frame image of image stream 1 and the feature points in the i-th frame image of image stream 2.
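Illustratively, S306 and S307 can be sketched as follows; this sketch assumes the initial translation pose is zero, assumes uniformly spaced frames (so the time difference information only fixes the frame ordering), and uses a simple moving average in place of the path constraint / path planning algorithm, which this application leaves open; the function name translation_compensation is a placeholder.

import numpy as np

def translation_compensation(global_shifts, radius=5):
    # S306: accumulate the per-frame (dx, dy) global translations into a translation
    # pose for each frame; the pose of the 1st frame is the initial pose (0, 0).
    poses = np.vstack([np.zeros((1, 2)), np.cumsum(np.asarray(global_shifts, float), axis=0)])
    # Smoothing: a moving average stands in for the path constraint / planning algorithm.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(poses, ((radius, radius), (0, 0)), mode="edge")
    smoothed = np.column_stack([np.convolve(padded[:, i], kernel, mode="valid") for i in range(2)])
    # S307: the per-frame translation compensation amount (delta L) is the pose difference.
    return smoothed - poses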
S308, carrying out EIS processing on the image rotation correction image stream based on the translation compensation quantity of the multi-frame images in the image stream to obtain the processed image stream.
In one implementation, the frame of image rotation correction image can be compensated according to the translation compensation amount of any frame of image in the multi-frame image, so as to obtain a processed image.
In another implementation, the EIS processing may be performed on the image rotation corrected image by the grid points according to the translational compensation amount, resulting in a processed image.
In the embodiment of the application, by adopting grid points, the pixel points in the image can be processed through an interpolation algorithm with different weights based on preset positions, so that the calculation amount can be effectively reduced to a certain extent; in addition, if the images in the image stream are acquired in a line-by-line exposure mode, a rolling shutter problem may also exist due to the difference in exposure time of different lines in the image; performing EIS processing on the image-rotation-corrected image according to the translation compensation amount through the grid points can therefore also realize rolling shutter correction in the image.
Illustratively, as shown in fig. 18, the grid points before deformation can be transformed into the grid points after deformation through the image rotation correction matrix; the grid-point displacement between the grid points before deformation and the grid points after deformation can be compensated through the translation compensation amount; the mapping from the input image to the output image is then performed according to the deformed grid points, so as to obtain the processed output image. Fig. 18 (a) shows the grid points corresponding to the first frame image, and fig. 18 (b) shows the grid points corresponding to the second frame image; O1 represents the center point of the grid of the first frame image, and O2 represents the center point of the grid of the second frame image; the global translation amount can represent the offset between the center point O1 of the 1 st frame image and the center point O2 of the 2 nd frame image. By adopting grid points, the translation compensation at positions between grid vertices can be obtained through an interpolation algorithm; for example, if a pixel in the image is located inside a grid cell, the translation compensation amount at that pixel position can be obtained by weighting the compensation amounts at the four vertices of the grid cell. Since the translation compensation amount is calculated only at the grid vertex positions and the compensation amounts of the remaining pixels are obtained through interpolation, the amount of calculation can be reduced to a certain extent.
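Illustratively, the grid-point compensation and interpolation described above can be sketched as follows; the uniform grid, the bilinear interpolation and the function name mesh_warp are assumptions for illustration only, not the claimed implementation.

import cv2
import numpy as np

def mesh_warp(image, grid_shift):
    # grid_shift has shape (rows+1, cols+1, 2): a (dx, dy) compensation at every grid vertex
    # (image rotation correction plus translation compensation, possibly per row for rolling shutter).
    h, w = image.shape[:2]
    # Interpolate the sparse vertex shifts up to full resolution, i.e. bilinearly weight the
    # four vertices of the grid cell that contains each pixel.
    dx = cv2.resize(grid_shift[..., 0].astype(np.float32), (w, h), interpolation=cv2.INTER_LINEAR)
    dy = cv2.resize(grid_shift[..., 1].astype(np.float32), (w, h), interpolation=cv2.INTER_LINEAR)
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    return cv2.remap(image, xs + dx, ys + dy, cv2.INTER_LINEAR)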
S309, displaying the processed image stream.
It should be appreciated that the processed image stream may be a relatively stable image stream generated after EIS processing.
For example, the processed image stream may be displayed in an electronic device.
Alternatively, in one implementation, the processed image stream may be saved and the electronic device may display the processed image stream upon detection of an operation to view the image stream.
In the embodiment of the application, the electronic equipment comprises the movable camera, and the problem that the angle of view of the long-focus camera is limited can be solved by the movement of the movable camera, so that the requirements of different shooting scenes are met; under the condition that the camera in the electronic equipment flexibly moves, the image anti-shake processing of the image stream can be realized according to the image content of the image stream acquired by the movable camera through the scheme of the application; in the scheme of the application, the data in the gyroscope sensor in the electronic equipment is not required to be acquired, and the motion quantity of the electronic equipment is acquired based on the data of the gyroscope sensor; according to the scheme, the translation pose of the electronic equipment when the image stream is acquired can be obtained according to the difference of the image content in the image stream, so that the motion quantity of the electronic equipment is obtained; therefore, in the scheme of the application, under the condition that the camera in the electronic equipment realizes flexible movement, the image anti-shake processing can be performed through the image content of the image stream acquired by the movable camera, so that the anti-shake effect of the image is improved; in addition, the scheme does not need to acquire data in a gyroscope sensor in the electronic equipment; therefore, the scheme of the application does not depend on a gyroscope sensor in the electronic equipment, and the application scene of the image anti-shake method can be improved to a certain extent.
Implementation two
In another implementation, the image rotation correction processing and the EIS processing may be performed on the YUV image stream at the same time; an image correction matrix is obtained according to the image rotation correction matrix and the image content difference of adjacent image frames in the YUV image stream, and the YUV image stream is processed through this image correction matrix in a single pass to obtain the processed image stream; as shown in fig. 19.
Fig. 19 is a schematic flowchart of an image anti-shake method according to an embodiment of the present application. The method 400 may be performed by the electronic device shown in fig. 1; the method 400 includes S401 to S409, and S401 to S409 are described in detail below, respectively.
S401, a movable camera collects Raw image streams.
Alternatively, the implementation may refer to S220 in fig. 14, or the related description of S301 in fig. 15, which is not described herein.
Illustratively, as shown in (a) of fig. 20, the electronic device acquires the N-1 th frame image while in pose 1; the electronic device then moves, and the N th frame image is acquired while the electronic device is in pose 2; because of the movement of the electronic device, the position of the same feature point in the two frames of images is offset; for example, due to the motion of the electronic device, the point A1 in the N-1 th frame image is shifted to the point A2 in the N th frame image by an offset amount L. Optionally, the movement of the electronic device from pose 1 to pose 2 may include a rotational movement and a translational movement; the translational movement of the electronic device is shown in (b) of fig. 20, and the rotational movement of the electronic device is shown in (c) of fig. 20; as shown in (b) of fig. 20, the translational movement of the electronic device produces an offset amount L1; as shown in (c) of fig. 20, the rotational movement of the electronic device produces an offset amount L2.
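Illustratively, the decomposition of the offset L into a translation-induced part L1 and a rotation-induced part L2 can be sketched as follows; the simplifying assumption here (not stated in this application) is that the rotational part is an in-plane rotation by θz about the image center, and the function name split_offset is a placeholder.

import numpy as np

def split_offset(total_offset, theta_z, point_xy):
    # point_xy: coordinates of the feature point relative to the image center.
    c, s = np.cos(theta_z), np.sin(theta_z)
    p = np.asarray(point_xy, dtype=float)
    l2 = np.array([[c, -s], [s, c]]) @ p - p              # offset produced by the rotation (L2)
    l1 = np.asarray(total_offset, dtype=float) - l2       # remaining translation-induced offset (L1)
    return l1, l2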
S402, performing front-end processing on the Raw image stream to obtain a YUV image stream.
Optionally, the implementation is described with reference to S302 in fig. 15, which is not described herein.
S403, performing image feature point detection and matching processing on two adjacent frames of images in the image stream to obtain feature point pairs.
Optionally, the implementation is described with reference to S304 in fig. 15, which is not described herein.
S404, calculating an image rotation correction matrix.
Alternatively, the image rotation correction matrix may be obtained according to the rotation amount of the movable camera and a position vector constructed in advance.
Alternatively, the implementation may be described with reference to S303 in fig. 15, which is not described herein.
And S405, carrying out coordinate mapping on the positions of the characteristic point pairs based on the image rotation correction matrix to obtain global translation.
Illustratively, the image coordinate system before the movement of the movable camera is shown in (a) of fig. 21, and the image coordinate system after the movement of the movable camera is shown in (b) of fig. 21; as shown in (b) of fig. 21, after the movable camera moves, image rotation of the image content may occur, because the coordinate system changes after the movable camera moves; for example, as shown in (c) of fig. 21, there is an image rotation angle between the positions of the movable camera before and after the movement (for example, the image rotation angle is (0, θz)); through this rotation angle, coordinate mapping can be performed on the image acquired in the pose shown in (b) of fig. 21. As shown in fig. 22, the point P represents a feature point in the image acquired by the electronic device in the pose shown in (b) of fig. 21; the feature point P is mapped according to the image rotation correction matrix (for example, according to the rotation angle) to obtain the mapped feature point P'.
In the embodiment of the application, the image feature point detection can be performed on the image in the image stream, and then the coordinate mapping can be performed on the feature point pairs in the image stream based on the image rotation correction matrix; it can be understood that the key points in the image stream are mapped in coordinates; compared with the coordinate mapping of the pixel points of each frame of image in the image stream, the method and the device can reduce the calculated amount of the electronic equipment to a certain extent.
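Illustratively, the coordinate mapping of the feature points (P to P' in fig. 22) can be sketched as follows; only the in-plane rotation angle θz is modeled here, which is an assumption for illustration, and the function name map_feature_points is a placeholder. Only the key points are mapped, not every pixel of the frame, which is where the saving in calculation comes from.

import numpy as np

def map_feature_points(points_xy, theta_z, center_xy):
    # Map feature-point coordinates from the rotated image back to the pre-rotation
    # coordinate system by rotating them by theta_z about the image center.
    c, s = np.cos(theta_z), np.sin(theta_z)
    rot = np.array([[c, -s], [s, c]])
    center = np.asarray(center_xy, dtype=float)
    pts = np.asarray(points_xy, dtype=float) - center
    return pts @ rot.T + center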
Optionally, the implementation is described with reference to S305 in fig. 15, which is not described herein.
Alternatively, in the embodiment of the present application, S403 may be performed first, and S404 may be performed second; alternatively, S404 may be performed first, and S403 may be performed later.
S406, obtaining a plurality of panning gestures based on the initial gestures, the global translation amount and the time difference information.
Optionally, the implementation is described with reference to S306 in fig. 15, which is not described herein.
S407, obtaining the translation compensation quantity of the multi-frame images in the image stream based on the difference value between the plurality of translation postures and the processed virtual translation postures.
Optionally, for this implementation, reference may be made to the related description of S307 in fig. 15, which is not repeated herein.
S408, carrying out EIS processing on the YUV image stream based on the translation compensation quantity of the multi-frame images in the image stream to obtain the processed image stream.
It should be understood that, in the second implementation, the correction matrix corresponding to the translational compensation amount includes image rotation correction and motion compensation; the difference between the implementation mode II and the implementation mode I is that in the implementation mode II, the image rotation correction and the motion compensation of the YUV image stream are realized through a correction matrix, and the processed image stream is obtained; it can be understood that in the second implementation manner, the YUV image stream is subjected to one warp processing according to the compensation translation amount; in the first implementation mode, image rotation correction and motion compensation are respectively realized through two correction matrixes, and a processed image stream is obtained; it can be understood that in the first implementation manner, the YUV image stream needs to be subjected to warp processing twice respectively; the first warp process is used for performing image rotation correction on the YUV image stream; the second warp process is used for performing motion compensation on the image stream subjected to image rotation correction; since the warp processing needs to traverse each pixel point in the image, the second implementation can reduce the amount of computation for performing EIS processing on the image stream to some extent as compared with the first implementation.
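Illustratively, the single-pass warp of implementation two can be sketched as follows, assuming (for illustration only) that the image rotation correction is expressed as a 2x3 affine matrix so that the translation compensation can be folded into its offset column; the function name combined_correction is a placeholder.

import cv2
import numpy as np

def combined_correction(frame, rot_correction_2x3, translation_comp):
    # Fold the translation compensation (delta L) into the rotation-correction affine,
    # so the frame is warped only once instead of once per correction step.
    m = np.asarray(rot_correction_2x3, dtype=np.float64).copy()
    m[:, 2] += np.asarray(translation_comp, dtype=np.float64)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, m, (w, h))

By contrast, implementation one would apply two warps per frame (one for image rotation correction and one for translation compensation), which is the difference in calculation amount discussed above.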
S409, displaying the processed image stream.
Optionally, the implementation is described with reference to S309 in fig. 15, which is not described herein.
In the embodiment of the application, the electronic equipment comprises the movable camera, and the problem that the angle of view of the long-focus camera is limited can be solved by the movement of the movable camera, so that the requirements of different shooting scenes are met; under the condition that the camera in the electronic equipment flexibly moves, the image anti-shake processing of the image stream can be realized according to the image content of the image stream acquired by the movable camera through the scheme of the application; in the scheme of the application, the data in the gyroscope sensor in the electronic equipment is not required to be acquired, and the motion quantity of the electronic equipment is acquired based on the data of the gyroscope sensor; according to the scheme, the translation pose of the electronic equipment when the image stream is acquired can be obtained according to the difference of the image content in the image stream, so that the motion quantity of the electronic equipment is obtained; therefore, in the scheme of the application, under the condition that the camera in the electronic equipment realizes flexible movement, the image anti-shake processing can be performed through the image content of the image stream acquired by the movable camera, so that the anti-shake effect of the image is improved; in addition, the scheme does not need to acquire data in a gyroscope sensor in the electronic equipment; therefore, the scheme of the application does not depend on a gyroscope sensor in the electronic equipment, and the application scene of the image anti-shake method can be improved to a certain extent.
Fig. 23 is a schematic flowchart of an image anti-shake method provided in an embodiment of the present application. The method 500 may be performed by the electronic device shown in fig. 1; the method 500 includes S510 to S570, and S510 to S570 are described in detail below, respectively.
It should be appreciated that the electronic device includes a first camera, which is a movable camera; for example, the first camera is a movable tele camera.
It should be further understood that the structural schematic diagram and the working principle of the first camera are described with reference to fig. 4 to 11, and are not repeated herein.
S510, starting a camera application program.
For example, a user may instruct an electronic device to open a camera application by clicking an icon of a "camera" application; or when the electronic equipment is in the screen locking state, the user can instruct the electronic equipment to start the camera application through a gesture of sliding rightward on the display screen of the electronic equipment. Or the electronic equipment is in a screen locking state, the screen locking interface comprises an icon of the camera application program, and the user instructs the electronic equipment to start the camera application program by clicking the icon of the camera application program. Or when the electronic equipment runs other applications, the applications have the authority of calling the camera application program; the user may instruct the electronic device to open the camera application by clicking on the corresponding control. For example, while the electronic device is running an instant messaging type application, the user may instruct the electronic device to open the camera application, etc., by selecting a control for the camera function.
It should be appreciated that the above is illustrative of the operation of opening a camera application; the camera application program can be started by voice indication operation or other operation indication electronic equipment; the present application is not limited in any way.
S520, acquiring a first image stream acquired by a first camera.
Wherein the first image stream includes a plurality of frames of images.
S530, obtaining global translation based on any two adjacent frames of images in the first image stream.
The global translation amount is used for representing the translation amount of the center points of two adjacent frames of images.
Optionally, in one implementation, obtaining the global translation amount based on any two adjacent frames of images in the first image stream includes:
performing image feature point detection and feature point matching on any two adjacent frames of images in the first image stream to obtain feature point pairs; and obtaining global translation based on the translation between the feature point pairs.
S540, obtaining a plurality of panning gestures based on the global translation amount and the time difference information.
The plurality of leveling gestures correspond to the multi-frame images one by one, one of the plurality of leveling gestures is used for representing the gesture of the electronic equipment for collecting one frame of image in the multi-frame images, and the time difference information is used for representing the time difference for collecting the multi-frame images in the first image stream.
It should be appreciated that the panning poses of the images in the first image stream may be derived based on the amount of pixel panning of the images; in the solution of the present application, the translation pose may refer to a translation amount in a camera coordinate system or a world coordinate system obtained by a translation amount in a pixel coordinate system.
In one implementation, the first image stream includes multiple frames of images, and the panning pose of the first frame of images in the multiple frames of images may be considered an initial panning pose (e.g., 0); according to the global translation amount and time difference information between the second frame image and the first frame image, obtaining the translation pose of the second frame image on the basis of the initial translation pose; similarly, according to the global translation amount and time difference information between the second frame image and the third frame image, the translation pose of the third frame image is obtained on the basis of the translation pose of the second frame image; by the method, the translation pose of each frame of image in the first image stream can be obtained according to the image content difference in the first image stream.
S550, obtaining a plurality of translation compensation amounts based on the plurality of translation postures.
The plurality of translation compensation amounts are used for representing pose differences between the plurality of translation poses and the smoothed translation poses, and the plurality of translation compensation amounts are in one-to-one correspondence with the multi-frame images.
Optionally, in one implementation, deriving the plurality of translational compensation amounts based on the plurality of translational gestures includes:
performing smoothing processing on the plurality of translation poses to obtain a plurality of smoothed translation poses; and obtaining the plurality of translation compensation amounts based on the pose difference values between the plurality of translation poses and the plurality of smoothed translation poses.
S560, obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream.
Optionally, in one implementation, obtaining the second image stream based on the plurality of translational compensation amounts and the first image stream includes:
obtaining a first correction vector based on the first motor position information, wherein the first motor position information is motor position information when the first camera acquires a first image stream, and the first correction vector is used for performing image rotation correction on the first image stream;
performing first image processing on the first image stream based on the first correction vector to obtain an image stream subjected to image rotation correction;
and performing second image processing on the image stream after the image rotation correction based on the plurality of translation compensation amounts to obtain a second image stream.
Optionally, in one implementation, performing a second image processing on the image stream after the image rotation correction based on a plurality of translation compensation amounts to obtain a second image stream, including:
and carrying out interpolation algorithm on pixel points of each image in the image stream after the image rotation correction based on the plurality of translation compensation amounts to obtain a second image stream.
For example, for this implementation, reference may be made to the related description of implementation one above, which is not repeated herein.
Optionally, in one implementation, obtaining the second image stream based on the plurality of translational compensation amounts and the first image stream includes:
obtaining a first correction vector based on the first motor position information, wherein the first motor position information is motor position information when the first camera acquires a first image stream, and the first correction vector is used for performing image rotation correction on the first image stream; obtaining a plurality of second correction vectors based on the first correction vector and the plurality of translation compensation amounts; and performing third image processing on the first image stream based on the plurality of second correction vectors to obtain a second image stream.
Optionally, in an implementation, performing third image processing on the first image stream based on the plurality of second correction vectors to obtain a second image stream, including:
and carrying out interpolation algorithm on pixel points in a plurality of image frames in the first image stream based on a plurality of second correction vectors to obtain a second image stream.
For example, for this implementation, reference may be made to the related description of implementation two above, which is not repeated herein.
S570, displaying the second image stream.
For example, the second image stream may be displayed directly in the electronic device.
For example, the second image stream may be saved and the electronic device may display the second image stream after detecting the operation to view the image stream.
Optionally, in one implementation, after the first camera rotates, a first offset exists between a center point of a lens of the first camera and an imaging center point.
Optionally, in one implementation, the first camera comprises a movable tele camera.
In the scheme of the application, the electronic equipment comprises the movable camera, and the problem that the angle of view of the long-focus camera is limited can be solved by the movement of the movable camera, so that the requirements of different shooting scenes are met; under the condition that the camera in the electronic equipment flexibly moves, the image anti-shake processing of the image stream can be realized according to the image content of the image stream acquired by the movable camera through the scheme of the application; in the scheme of the application, the data in the gyroscope sensor in the electronic equipment is not required to be acquired, and the motion quantity of the electronic equipment is acquired based on the data of the gyroscope sensor; according to the scheme, the translation pose of the electronic equipment when the image stream is acquired can be obtained according to the difference of the image content in the image stream, so that the motion quantity of the electronic equipment is obtained; therefore, in the scheme of the application, under the condition that the camera in the electronic equipment realizes flexible movement, the image anti-shake processing can be performed through the image content of the image stream acquired by the movable camera, so that the anti-shake effect of the image is improved; in addition, the scheme does not need to acquire data in a gyroscope sensor in the electronic equipment; therefore, the scheme of the application does not depend on a gyroscope sensor in the electronic equipment, and the application scene of the image anti-shake method can be improved to a certain extent.
It should be noted that, the data of the gyroscope sensor can only represent the rotation motion of the electronic device, but cannot represent the translation motion of the electronic device; therefore, the movement of the electronic device cannot be accurately detected by the data in the gyro sensor; however, either rotational or translational movement of the electronic device can be manifested in differences in image content; therefore, through the image anti-shake scheme of this application, can improve image anti-shake effect to a certain extent.
It should be appreciated that the above illustration is to aid one skilled in the art in understanding the embodiments of the application and is not intended to limit the embodiments of the application to the specific numerical values or the specific scenarios illustrated. It will be apparent to those skilled in the art from the foregoing description that various equivalent modifications or variations can be made, and such modifications or variations are intended to be within the scope of the embodiments of the present application.
The image anti-shake method provided in the embodiment of the present application is described in detail above with reference to fig. 1 to 23; an embodiment of the device of the present application will be described in detail below with reference to fig. 24 and 25. It should be understood that the apparatus in the embodiments of the present application may perform the methods in the embodiments of the present application, that is, specific working procedures of the following various products may refer to corresponding procedures in the embodiments of the methods.
Fig. 24 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 600 includes a processing module 610 and a display module 620; the electronic device 600 includes a first camera, which is a movable camera.
Wherein, the processing module 610 is configured to: starting a camera application program; acquiring a first image stream acquired by the first camera, wherein the first image stream comprises multi-frame images; obtaining global translation quantity based on any two adjacent frames of images in the first image stream, wherein the global translation quantity is used for representing the translation quantity of the center points of the two adjacent frames of images; obtaining a plurality of leveling postures based on the global translation amount and time difference information, wherein the plurality of leveling postures correspond to the multi-frame images one by one, one of the plurality of leveling postures is used for representing the posture of the electronic equipment for collecting one frame of image in the multi-frame images, and the time difference information is used for representing the time difference for collecting the multi-frame images in the first image stream; obtaining a plurality of translation compensation amounts based on the plurality of translation postures, wherein the plurality of translation compensation amounts are used for representing pose differences between the plurality of translation postures and the smoothed translation postures, and the plurality of translation compensation amounts are in one-to-one correspondence with the multi-frame images; obtaining a second image stream based on the plurality of translational compensation amounts and the first image stream; the display module 620 is configured to: and displaying the second image stream.
Optionally, as an embodiment, the processing module 610 is further configured to:
obtaining a first correction vector based on first motor position information, wherein the first motor position information is motor position information when the first camera acquires the first image stream, and the first correction vector is used for performing image rotation correction on the first image stream;
performing first image processing on the first image stream based on the first correction vector to obtain an image stream subjected to image rotation correction;
and performing second image processing on the image stream after the image rotation correction based on the plurality of translation compensation amounts to obtain the second image stream.
Optionally, as an embodiment, the processing module 610 is further configured to:
and carrying out interpolation algorithm on pixel points of each image in the image stream after the image rotation correction based on the plurality of translation compensation amounts to obtain a second image stream.
Optionally, as an embodiment, the processing module 610 is further configured to:
obtaining a first correction vector based on first motor position information, wherein the first motor position information is motor position information when the first camera acquires the first image stream, and the first correction vector is used for performing image rotation correction on the first image stream;
obtaining a plurality of second correction vectors based on the first correction vector and the plurality of translation compensation amounts;
and performing third image processing on the first image stream based on the plurality of second correction vectors to obtain the second image stream.
Optionally, as an embodiment, the processing module 610 is further configured to:
and carrying out interpolation algorithm on pixel points in the plurality of image frames in the first image stream based on the plurality of second correction vectors to obtain the second image stream.
Optionally, as an embodiment, the processing module 610 is further configured to:
performing image feature point detection and feature point matching on any two adjacent frames of images in the first image stream to obtain feature point pairs;
and obtaining the global translation amount based on the translation amount between the characteristic point pairs.
Optionally, as an embodiment, the processing module 610 is further configured to:
performing smoothing treatment on the plurality of translation gestures to obtain a plurality of smoothed translation gestures;
and obtaining the plurality of translation compensation amounts based on the pose difference values between the plurality of translation poses and the plurality of smoothed translation poses.
Optionally, as an embodiment, after the first camera rotates, a first offset exists between a center point of a lens of the first camera and an imaging center point.
Optionally, as an embodiment, the first camera includes a movable tele camera.
The electronic device 600 is embodied as a functional unit. The term "module" herein may be implemented in software and/or hardware, and is not specifically limited thereto.
For example, a "module" may be a software program, a hardware circuit, or a combination of both that implements the functionality described above. The hardware circuitry may include application specific integrated circuits (application specific integrated circuit, ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions.
Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 25 shows a schematic structural diagram of an electronic device provided in the present application. The dashed line in fig. 25 indicates that the unit or the module is optional. The electronic device 800 may be used to implement the image anti-shake method described in the above method embodiments.
The electronic device 800 includes one or more processors 801, which one or more processors 801 may support the electronic device 800 to implement the image anti-shake method in the method embodiments. The processor 801 may be a general purpose processor or a special purpose processor. For example, the processor 801 may be a central processing unit (central processing unit, CPU), digital signal processor (digital signal processor, DSP), application specific integrated circuit (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA), or other programmable logic device such as discrete gates, transistor logic, or discrete hardware components.
The processor 801 may be used to control the electronic device 800, execute software programs, and process data for the software programs. The electronic device 800 may also include a communication unit 805 to enable input (reception) and output (transmission) of signals.
For example, the electronic device 800 may be a chip, the communication unit 805 may be an input and/or output circuit of the chip, or the communication unit 805 may be a communication interface of the chip, which may be an integral part of a terminal device or other electronic device.
For another example, the electronic device 800 may be a terminal device, the communication unit 805 may be a transceiver of the terminal device, or the communication unit 805 may be a transceiver circuit of the terminal device.
Electronic device 800 may include one or more memories 802 having programs 804 stored thereon, the programs 804 being executable by processor 801 to generate instructions 803, such that processor 801 performs the methods described in the method embodiments described above in accordance with instructions 803.
Optionally, the memory 802 may also have data stored therein. Optionally, processor 801 may also read data stored in memory 802, which may be stored at the same memory address as program 804, or which may be stored at a different memory address than program 804.
The processor 801 and the memory 802 may be provided separately or may be integrated together, for example, on a System On Chip (SOC) of the terminal device.
Illustratively, the memory 802 may be used to store a related program 804 of the image anti-shake method provided in the embodiments of the present application, and the processor 801 may be used to call the related program 804 of the image anti-shake method stored in the memory 802 during video processing, to execute the image anti-shake method of the embodiments of the present application; for example, a camera application is started; acquiring a first image stream acquired by a first camera, wherein the first image stream comprises multi-frame images; based on any two adjacent frames of images in the first image stream, obtaining global translation quantity, wherein the global translation quantity is used for representing the translation quantity of the center points of the two adjacent frames of images; obtaining a plurality of panning gestures based on global translation amount and time difference information, wherein the plurality of panning gestures correspond to the multi-frame images one by one, one panning gesture in the plurality of panning gestures is used for representing the gesture of the electronic equipment for acquiring one frame of image in the multi-frame images, and the time difference information is used for representing the time difference for acquiring the multi-frame images in the first image stream; obtaining a plurality of translation compensation amounts based on the plurality of translation postures, wherein the plurality of translation compensation amounts are used for representing pose differences between the plurality of translation postures and the smoothed translation postures, and the plurality of translation compensation amounts are in one-to-one correspondence with the multi-frame images; obtaining a second image stream based on the plurality of translational compensation amounts and the first image stream; the second image stream is displayed.
The present application also provides a computer program product which, when executed by the processor 801, implements the method of any of the method embodiments of the present application.
The computer program product may be stored in a memory 802, such as program 804, with the program 804 ultimately being converted into an executable object file that can be executed by the processor 801 via preprocessing, compiling, assembling, and linking processes.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a computer, implements the method of any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
The computer readable storage medium may be, for example, the memory 802. The memory 802 may be a volatile memory or a nonvolatile memory, or the memory 802 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes and technical effects of the apparatus and device described above may refer to corresponding processes and technical effects in the foregoing method embodiments, which are not described in detail herein.
In several embodiments provided in the present application, the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, some features of the method embodiments described above may be omitted, or not performed. The above-described apparatus embodiments are merely illustrative, the division of units is merely a logical function division, and there may be additional divisions in actual implementation, and multiple units or components may be combined or integrated into another system. In addition, the coupling between the elements or the coupling between the elements may be direct or indirect, including electrical, mechanical, or other forms of connection.
It should be understood that, in the various embodiments of the present application, the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely one association relationship describing the associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
In summary, the foregoing is merely a preferred embodiment of the technical solution of the present application, and is not intended to limit the scope of protection of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (12)

1. The image anti-shake method is characterized by being applied to electronic equipment, wherein the electronic equipment comprises a first camera, the first camera is a movable camera, and the method comprises the following steps:
starting a camera application program;
acquiring a first image stream acquired by the first camera, wherein the first image stream comprises multi-frame images;
obtaining a global translation amount based on any two adjacent frames of images in the first image stream, wherein the global translation amount is used for representing the translation amount of the center points of the two adjacent frames of images;
obtaining a plurality of translation postures based on the global translation amount and time difference information, wherein the plurality of translation postures correspond to the multi-frame images one by one, one of the plurality of translation postures is used for representing the posture of the electronic equipment when collecting one frame of image in the multi-frame images, and the time difference information is used for representing the time differences of collecting the multi-frame images in the first image stream;
obtaining a plurality of translation compensation amounts based on the plurality of translation postures, wherein the plurality of translation compensation amounts are used for representing pose differences between the plurality of translation postures and the smoothed translation postures, and the plurality of translation compensation amounts are in one-to-one correspondence with the multi-frame images;
obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream;
and displaying the second image stream.
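For orientation only, the following is a minimal Python/OpenCV sketch of the flow recited in claim 1; it is illustrative and not part of the claims. The assumed frame format (8-bit BGR), the use of phase correlation in place of the feature-point matching recited in claim 6, the 9-frame moving-average filter, and the omission of the time-difference information are all simplifying assumptions.

    import cv2
    import numpy as np

    def stabilize_stream(frames):
        """Sketch of the claim-1 flow: global translation between adjacent frames
        -> accumulated translation poses -> smoothed poses -> per-frame translation
        compensation amounts -> second (compensated) image stream."""
        grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32) for f in frames]
        # Global translation of adjacent frames; phase correlation stands in here
        # for the feature-point matching of claim 6.
        shifts = [cv2.phaseCorrelate(grays[i - 1], grays[i])[0]
                  for i in range(1, len(grays))]
        poses = np.vstack([np.zeros((1, 2), np.float32),
                           np.cumsum(np.asarray(shifts, np.float32), axis=0)])
        # Moving-average smoothing of the pose track; the compensation amount is
        # the pose difference between each pose and its smoothed counterpart.
        kernel = np.ones((9, 1), np.float32) / 9.0
        smoothed = cv2.filter2D(poses, -1, kernel, borderType=cv2.BORDER_REPLICATE)
        compensation = smoothed - poses
        out = []
        for frame, (dx, dy) in zip(frames, compensation):
            m = np.float32([[1, 0, dx], [0, 1, dy]])
            out.append(cv2.warpAffine(frame, m, (frame.shape[1], frame.shape[0]),
                                      flags=cv2.INTER_LINEAR))
        return out

The sketches placed after claims 3, 6, and 7 below show possible forms of the interpolation, feature-point matching, and smoothing steps in more detail.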
2. The image anti-shake method according to claim 1, wherein the obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream includes:
obtaining a first correction vector based on first motor position information, wherein the first motor position information is motor position information when the first camera acquires the first image stream, and the first correction vector is used for performing image rotation correction on the first image stream;
performing first image processing on the first image stream based on the first correction vector to obtain an image stream subjected to image rotation correction;
and performing second image processing on the image stream after the image rotation correction based on the plurality of translation compensation amounts to obtain the second image stream.
3. The image anti-shake method according to claim 2, wherein the performing second image processing on the image stream after the image rotation correction based on the plurality of translation compensation amounts to obtain the second image stream comprises:
applying an interpolation algorithm to pixel points of each image in the image stream after the image rotation correction, based on the plurality of translation compensation amounts, to obtain the second image stream.
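One possible reading of the interpolation step of claim 3, sketched below in Python/OpenCV and not limiting the claim: each rotation-corrected frame is resampled at sub-pixel positions offset by its translation compensation amount. The use of cv2.remap and bilinear interpolation is an assumption.

    import cv2
    import numpy as np

    def apply_translation_compensation(frame, dx, dy):
        """Shift one rotation-corrected frame by the (possibly fractional)
        compensation amount (dx, dy), interpolating pixel values bilinearly."""
        h, w = frame.shape[:2]
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        # Source sampling grid displaced opposite to the desired image shift.
        map_x = xs - np.float32(dx)
        map_y = ys - np.float32(dy)
        return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_REPLICATE)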
4. The image anti-shake method according to claim 1, wherein the obtaining a second image stream based on the plurality of translation compensation amounts and the first image stream includes:
obtaining a first correction vector based on first motor position information, wherein the first motor position information is motor position information when the first camera acquires the first image stream, and the first correction vector is used for performing image rotation correction on the first image stream;
obtaining a plurality of second correction vectors based on the first correction vector and the plurality of translation compensation amounts;
and performing third image processing on the first image stream based on the plurality of second correction vectors to obtain the second image stream.
5. The image anti-shake method according to claim 4, wherein the performing third image processing on the first image stream based on the plurality of second correction vectors to obtain the second image stream includes:
applying an interpolation algorithm to pixel points in the plurality of image frames in the first image stream, based on the plurality of second correction vectors, to obtain the second image stream.
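An illustrative sketch, again not limiting claims 4 and 5, of how the rotation correction derived from the first motor position information and the translation compensation can be folded into a single second correction vector applied in one interpolation pass. The linear motor-position-to-angle constant is a placeholder; a real implementation would rely on module calibration data.

    import cv2
    import numpy as np

    # Placeholder calibration: image roll in degrees per unit of motor travel.
    DEG_PER_MOTOR_STEP = 0.01

    def second_correction(frame, motor_position, dx, dy):
        """Combine the rotation correction (from the motor position) with the
        translation compensation (dx, dy) into one affine correction and apply
        it with a single bilinear interpolation pass."""
        h, w = frame.shape[:2]
        angle = motor_position * DEG_PER_MOTOR_STEP        # assumed linear mapping
        m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
        m[0, 2] += dx                                      # append translation part
        m[1, 2] += dy
        return cv2.warpAffine(frame, m, (w, h), flags=cv2.INTER_LINEAR)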
6. The image anti-shake method according to any one of claims 1 to 5, wherein the obtaining a global translation amount based on any two adjacent frames of images in the first image stream includes:
performing image feature point detection and feature point matching on any two adjacent frames of images in the first image stream to obtain feature point pairs;
and obtaining the global translation amount based on the translation amount between the characteristic point pairs.
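The feature-point detection and matching of claim 6 is not tied to any particular detector; the sketch below assumes ORB features, brute-force Hamming matching, and a robust median over the matched point pairs, all of which are illustrative choices rather than requirements of the claim.

    import cv2
    import numpy as np

    def estimate_global_translation(prev_gray, curr_gray, max_matches=200):
        """Estimate the translation of the image center between two adjacent
        frames (8-bit grayscale) from matched feature-point pairs."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return np.zeros(2, np.float32)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        matches = matches[:max_matches]
        if not matches:
            return np.zeros(2, np.float32)
        # Per-pair translation; the median suppresses mismatched outliers.
        deltas = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                           for m in matches], dtype=np.float32)
        return np.median(deltas, axis=0)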
7. The image anti-shake method according to any one of claims 1 to 6, wherein the obtaining a plurality of translation compensation amounts based on the plurality of translation postures includes:
performing smoothing processing on the plurality of translation postures to obtain a plurality of smoothed translation postures;
and obtaining the plurality of translation compensation amounts based on the pose differences between the plurality of translation postures and the plurality of smoothed translation postures.
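Claim 7 does not prescribe a particular smoothing filter; the sketch below uses a centered moving average over the translation-pose track, which is an illustrative assumption, and returns the per-frame compensation amounts as the pose differences.

    import numpy as np

    def smooth_and_compensate(poses, window=15):
        """Smooth the per-frame translation poses with a moving average and
        return the smoothed poses plus the per-frame compensation amounts."""
        poses = np.asarray(poses, dtype=np.float32)        # shape (N, 2)
        if window % 2 == 0:
            window += 1                                    # keep the filter centered
        pad = window // 2
        kernel = np.ones(window, np.float32) / window
        padded = np.pad(poses, ((pad, pad), (0, 0)), mode='edge')
        smoothed = np.stack([np.convolve(padded[:, c], kernel, mode='valid')
                             for c in range(poses.shape[1])], axis=1)
        compensation = smoothed - poses                    # pose difference per frame
        return smoothed, compensation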
8. The image anti-shake method according to any one of claims 1 to 7, wherein after the first camera is rotated, there is a first offset between a center point of a lens of the first camera and an imaging center point.
9. The image anti-shake method according to any one of claims 1 to 8, characterized in that the first camera includes a movable tele camera.
10. An electronic device, characterized in that the electronic device comprises:
one or more processors, memory, and a first camera;
the memory is coupled with the one or more processors, the memory is configured to store computer program code, and the computer program code comprises computer instructions that, when invoked by the one or more processors, cause the electronic device to perform the image anti-shake method according to any one of claims 1 to 9.
11. A chip system, applied to an electronic device, wherein the chip system comprises one or more processors configured to invoke computer instructions to cause the electronic device to perform the image anti-shake method according to any one of claims 1 to 9.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which when executed by a processor causes the processor to perform the image anti-shake method according to any one of claims 1 to 9.
CN202310552171.4A 2023-05-16 2023-05-16 Image anti-shake method and electronic equipment Pending CN117714867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310552171.4A CN117714867A (en) 2023-05-16 2023-05-16 Image anti-shake method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310552171.4A CN117714867A (en) 2023-05-16 2023-05-16 Image anti-shake method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117714867A true CN117714867A (en) 2024-03-15

Family

ID=90163029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310552171.4A Pending CN117714867A (en) 2023-05-16 2023-05-16 Image anti-shake method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117714867A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020132917A1 (en) * 2018-12-26 2020-07-02 Huawei Technologies Co., Ltd. Imaging device, image stabilization device, imaging method and image stabilization method
WO2021081707A1 (en) * 2019-10-28 2021-05-06 深圳市大疆创新科技有限公司 Data processing method and apparatus, movable platform and computer-readable storage medium
CN114339102A (en) * 2020-09-29 2022-04-12 华为技术有限公司 Video recording method and device
CN115150542A (en) * 2021-03-30 2022-10-04 华为技术有限公司 Video anti-shake method and related equipment
CN115546043A (en) * 2022-03-31 2022-12-30 荣耀终端有限公司 Video processing method and related equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020132917A1 (en) * 2018-12-26 2020-07-02 Huawei Technologies Co., Ltd. Imaging device, image stabilization device, imaging method and image stabilization method
WO2021081707A1 (en) * 2019-10-28 2021-05-06 深圳市大疆创新科技有限公司 Data processing method and apparatus, movable platform and computer-readable storage medium
CN114339102A (en) * 2020-09-29 2022-04-12 华为技术有限公司 Video recording method and device
CN115150542A (en) * 2021-03-30 2022-10-04 华为技术有限公司 Video anti-shake method and related equipment
CN115546043A (en) * 2022-03-31 2022-12-30 荣耀终端有限公司 Video processing method and related equipment

Similar Documents

Publication Publication Date Title
KR102385360B1 (en) Electronic device performing image correction and operation method of thereof
CN113454982B (en) Electronic device for stabilizing image and method of operating the same
CN109194876B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP5450739B2 (en) Image processing apparatus and image display apparatus
CN112602111A (en) Electronic apparatus that blurs image obtained by combining a plurality of images based on depth information and method of driving the same
CN115701125B (en) Image anti-shake method and electronic equipment
CN114339102B (en) Video recording method and equipment
CN115802158B (en) Method for switching cameras and electronic equipment
CN113132612A (en) Image stabilization processing method, terminal shooting method, medium and system
US20220360714A1 (en) Camera movement control method and device
WO2023005355A1 (en) Image anti-shake method and electronic device
CN116546316B (en) Method for switching cameras and electronic equipment
CN107071277B (en) Optical drawing shooting device and method and mobile terminal
CN115908120B (en) Image processing method and electronic device
EP4228236A1 (en) Image processing method and electronic device
CN117135456B (en) Image anti-shake method and electronic equipment
CN117714867A (en) Image anti-shake method and electronic equipment
CN117135420B (en) Image synchronization method and related equipment thereof
CN117135459A (en) Image anti-shake method and electronic equipment
CN114979458A (en) Image shooting method and electronic equipment
CN115767287B (en) Image processing method and electronic equipment
CN117714863A (en) Shooting method and related equipment thereof
CN117135458A (en) Optical anti-shake method and related equipment
CN116051647B (en) Camera calibration method and electronic equipment
US11928775B2 (en) Apparatus, system, method, and non-transitory medium which map two images onto a three-dimensional object to generate a virtual image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination