WO2022267565A1 - Video shooting method, electronic device and computer-readable storage medium - Google Patents

Video shooting method, electronic device and computer-readable storage medium

Info

Publication number
WO2022267565A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
electronic device
images
exposure
intercepted
Prior art date
Application number
PCT/CN2022/080722
Other languages
English (en)
French (fr)
Inventor
李森
王宇
朱聪超
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Publication of WO2022267565A1

Classifications

    • All classifications fall under G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
    • G06T 7/13 Edge detection (under G06T 7/10 Segmentation; Edge detection)
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/215 Motion-based segmentation
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20201 Motion blur correction
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • the present application belongs to the field of image processing, and in particular relates to a video shooting method, electronic equipment and a computer-readable storage medium.
  • When an electronic device shoots a video, there may be a moving object in the captured picture, or the electronic device held by the photographer may shake, causing a relative displacement between the electronic device and the subject. Because of this relative displacement, the exposure position of the subject on the image sensor may move within the exposure time of a single frame, which may cause artifacts or motion blur in the captured video and degrade its recording quality.
  • the present application provides a video shooting method, an electronic device, and a computer-readable storage medium, which can reduce artifacts in the video or reduce motion blur in the video when the electronic device is shooting a video, and improve the shooting quality of the video.
  • The present application provides a video shooting method applied to an electronic device. The method may include: the electronic device determines that the current shooting state is a motion state and obtains the first exposure duration from the current shooting parameters; the electronic device performs time interception on the first exposure duration to obtain intercepted images, where the intercepted images include two or more frames and each intercepted image corresponds to an exposure duration obtained by the time interception; and the electronic device fuses the intercepted images into one frame of image and generates a video from the fused images.
  • the shooting state of the electronic device may include a moving state and a non-moving state (that is, a stable state).
  • Time interception is performed on the acquired first exposure duration: the first exposure duration is split into two or more second exposure durations, and exposure and image acquisition are performed separately for each of the intercepted second exposure durations to obtain two or more intercepted images.
  • the two or more intercepted images are fused into one frame image of the video, so as to generate a video with a frame rate consistent with the first exposure duration.
  • the intercepted second exposure duration is shorter than the first exposure duration.
  • The distance of relative movement within a second exposure duration is smaller than the distance of relative movement within the first exposure duration; therefore, the degree of motion blur and artifacts in each intercepted image is smaller, so the quality of the fused image can be effectively improved and the artifacts and motion blur of the image can be reduced.
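  • As an illustration only (not text from the patent), the following sketch shows the interception-and-fusion flow in simplified form. It assumes a hypothetical capture(exposure_s) camera call and uses plain averaging as the fusion step; the patent's fusion additionally treats motion and non-motion areas differently, as described later.

```python
import numpy as np

def capture_fused_frame(capture, first_exposure_s: float, n_splits: int = 3) -> np.ndarray:
    """Split one frame's exposure into n_splits shorter exposures and fuse the results."""
    # Each intercepted ("second") exposure duration is shorter than the first one.
    second_exposure_s = first_exposure_s / n_splits
    sub_frames = [capture(second_exposure_s).astype(np.float32) for _ in range(n_splits)]
    # Naive fusion by averaging; motion blur in each sub-frame is smaller because
    # the relative movement within the shorter exposure is smaller.
    fused = np.mean(sub_frames, axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)
```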
  • the shooting state may be understood as a relative state between the electronic device and the object in the shot image.
  • the shooting state of the electronic device can be determined according to the change of the pixels of the collected image, or the data collected by the sensor of the electronic device.
  • When the electronic device is in a stable state, it may directly perform exposure and image acquisition with the first exposure duration and directly use the acquired image as a frame of the video. That is, unlike in the motion state, it is not necessary to perform time interception on the first exposure duration or to fuse intercepted images.
  • the motion sensor may collect sensing data of the electronic device, and determine the current shooting state of the electronic device according to the sensing data.
  • the sensing data may include translational acceleration and angular displacement acceleration.
  • the translational acceleration and angular displacement acceleration of the electronic device can be determined by devices such as an acceleration sensor or a gyroscope.
  • the translational velocity of the electronic device can be calculated through the translational acceleration, and the angular displacement speed of the electronic device can be calculated through the angular displacement acceleration.
  • the shooting state of the electronic device can be more reliably determined according to one parameter among the translational velocity, the translational acceleration, the angular displacement velocity, and the angular displacement acceleration, or a combination of several parameters.
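  • A minimal, non-authoritative sketch of such a sensor-based decision is shown below; the threshold values and the assumption that gravity has been removed from the accelerometer reading are illustrative, not taken from the patent.

```python
import numpy as np

ACCEL_THRESH = 0.5  # m/s^2, hypothetical threshold on translational acceleration
GYRO_THRESH = 0.2   # rad/s, hypothetical threshold on angular velocity

def shooting_state(accel_xyz, gyro_xyz) -> str:
    """Classify the shooting state from motion-sensor samples.

    accel_xyz: linear acceleration with gravity removed, in m/s^2.
    gyro_xyz: angular velocity from the gyroscope, in rad/s.
    """
    accel_mag = np.linalg.norm(accel_xyz)
    gyro_mag = np.linalg.norm(gyro_xyz)
    if accel_mag > ACCEL_THRESH or gyro_mag > GYRO_THRESH:
        return "motion"
    return "stable"
```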
  • the current shooting state of the electronic device may also be determined according to changes in pixels in an image captured by the electronic device.
  • the electronic device collects two frames of images at predetermined time intervals, compares pixels in the two frames of images, and determines the number of changed pixels.
  • the degree of change of the image is reflected according to the ratio of the number of changed pixels to the total pixels of the image.
  • If the ratio is greater than the preset ratio threshold, the image content has changed drastically and the current shooting state of the electronic device is a motion state; otherwise, it is a stable state.
  • The magnitude of the ratio threshold may be associated with the time interval between the two compared frames: as the time interval increases, the ratio threshold can be correspondingly increased. When the two frames are adjacent images in the video, the ratio threshold may be determined according to the frame rate of the video.
  • The pixels of the two frames may be compared one by one. Since the electronic device itself may translate or rotate between the capture of the two frames, the images to be compared can first be registered, and the similarity comparison can then be performed on the registered images to improve the precision of the pixel comparison.
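  • A sketch of the changed-pixel-ratio check (assuming OpenCV, grayscale 8-bit frames, and illustrative thresholds that are not from the patent) might look as follows; in practice the two frames would first be registered, as noted above.

```python
import cv2
import numpy as np

def is_motion_state(frame_a: np.ndarray, frame_b: np.ndarray,
                    pixel_diff_thresh: int = 15, ratio_thresh: float = 0.10) -> bool:
    """Return True when the fraction of changed pixels exceeds the ratio threshold."""
    diff = cv2.absdiff(frame_a, frame_b)              # per-pixel absolute difference
    changed = np.count_nonzero(diff > pixel_diff_thresh)
    return changed / diff.size > ratio_thresh
```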
  • The shooting state may also be determined from changes in the sharpness information of the images collected by the electronic device.
  • For example, the electronic device can acquire two frames captured within a predetermined time interval, perform edge detection on the two frames, and determine the area in which the sharpness of the edges changes. If the ratio of the area where the sharpness changes to the edge area is greater than or equal to a predetermined edge ratio threshold, the shooting state of the electronic device is a motion state; if the ratio is smaller than the predetermined edge ratio threshold, the shooting state of the electronic device is a stable state.
  • the electronic device When the electronic device shoots a relatively moving object, the overall picture shifts, resulting in blurring of the outline of the object in the picture, thereby reducing the sharpness of the outline of the object.
  • Through comparison of the sharpness information, it can be detected that the electronic device is currently in an unstable state (or a state combining a stable state and a motion state), the continuing trend of the shooting state over a coming period can be predicted from the detection result, and that trend can be used to determine how long the electronic device should expose in order to produce a video with better image quality.
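  • The sketch below illustrates one way such an edge-sharpness comparison could be implemented (assuming OpenCV; the Canny thresholds and the sharpness-change threshold are assumptions, not values from the patent). If the returned ratio is at or above the edge ratio threshold, the state would be treated as a motion state.

```python
import cv2
import numpy as np

def sharpness_changed_ratio(frame_a: np.ndarray, frame_b: np.ndarray,
                            canny_lo: int = 50, canny_hi: int = 150,
                            sharp_diff_thresh: float = 20.0) -> float:
    """Fraction of edge pixels whose local gradient magnitude changed noticeably."""
    edges = cv2.Canny(frame_a, canny_lo, canny_hi) > 0   # edge area of the first frame

    def grad_mag(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        return cv2.magnitude(gx, gy)                     # per-pixel sharpness measure

    changed = np.abs(grad_mag(frame_a) - grad_mag(frame_b)) > sharp_diff_thresh
    edge_pixels = np.count_nonzero(edges)
    return 0.0 if edge_pixels == 0 else np.count_nonzero(changed & edges) / edge_pixels
```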
  • the first exposure duration is intercepted to generate two or more second exposure durations, and two or more intercepted images can be generated according to the second exposure duration.
  • The electronic device may determine the motion area and the non-motion area in the intercepted images; the electronic device fuses the non-motion areas of the intercepted images and combines the result with the motion area of a specified intercepted image to generate one frame of image.
  • image quality enhancement processing may be performed according to the data of the non-moving area of the two images to obtain the fused non-moving area.
  • The motion area of the specified intercepted image is used directly in the combination to generate the frame of image. Since the image quality of the non-motion area is enhanced, and since the second exposure duration is shorter, the image in the motion area of the intercepted image is clearer than an image exposed for the first exposure duration, so the fused image can reduce motion blur and image artifacts.
  • The electronic device may perform registration transformation on the intercepted images to obtain a reference image and transformed images; the electronic device calculates the pixel difference between a transformed image and the reference image; when the pixel difference is greater than or equal to a predetermined pixel difference threshold, the electronic device determines that the pixel point corresponding to the pixel difference belongs to the motion area; when the pixel difference is smaller than the predetermined pixel difference threshold, the electronic device determines that the pixel point belongs to the non-motion area.
  • the pixels of the intercepted images are compared one by one to determine the area to which each pixel belongs.
  • the determined motion area can also be filtered and screened to improve optimization efficiency.
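  • A simplified sketch of this per-pixel classification (assuming OpenCV and an illustrative pixel-difference threshold; the morphological clean-up stands in for the filtering and screening mentioned above):

```python
import cv2
import numpy as np

def motion_mask(reference: np.ndarray, transformed: np.ndarray,
                pixel_diff_thresh: int = 25) -> np.ndarray:
    """Return a boolean mask that is True where pixels belong to the motion area."""
    diff = cv2.absdiff(reference, transformed)
    mask = (diff >= pixel_diff_thresh).astype(np.uint8)
    # Remove isolated pixels so later processing of the motion area is more efficient.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return mask.astype(bool)
```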
  • One of the intercepted images can be chosen as a reference image, the other intercepted images can be registered with the reference image to determine the transformation matrix between each of the other images and the reference image, and image transformation can be performed on the other images according to the determined transformation matrices to obtain transformed images.
  • the reference image may be an image in the middle of the intercepted image.
  • For example, there may be three intercepted images; the second intercepted image may be used as the reference image, and the first and third intercepted images as the other images.
  • Registration transformation is performed on the first intercepted image according to the transformation matrix between the first intercepted image and the second intercepted image, and registration transformation is performed on the third intercepted image according to the transformation matrix between the third intercepted image and the second intercepted image.
  • Any intercepted image may be used as the reference image, and the transformed images are then compared with that reference image.
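  • As an illustrative sketch of such feature-based registration (assuming OpenCV; ORB features and a RANSAC homography are one common choice, not necessarily the one used in the patent):

```python
import cv2
import numpy as np

def register_to_reference(reference: np.ndarray, other: np.ndarray) -> np.ndarray:
    """Align 'other' to 'reference' and return the transformed image."""
    orb = cv2.ORB_create(1000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_oth, des_oth = orb.detectAndCompute(other, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_oth, des_ref), key=lambda m: m.distance)[:200]
    src = np.float32([kp_oth[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Transformation matrix between the other image and the reference image.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(other, H, (w, h))
```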
  • filtering processing may also be performed on the motion region in the generated frame of image to obtain a filtered image. Therefore, the image quality of the motion region of the generated frame of image can be further improved.
  • An embodiment of the present application provides an electronic device. The electronic device includes a camera for collecting images; a display screen for displaying the collected images; one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the electronic device, cause the electronic device to perform the video shooting method described in the first aspect.
  • An embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium includes computer instructions which, when run on a computer, cause the computer to execute the video shooting method described in the first aspect.
  • FIG. 1 is a schematic diagram of a camera system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of image acquisition in a non-overlapping mode provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of image acquisition in an overlapping mode provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an implementation scene of a video shooting method provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an implementation scene of another video shooting method provided in the embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a software structure provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the operation process of video shooting by an electronic device across the hardware abstraction layer and the framework layer provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an implementation flow of a video shooting method provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the relationship between the first exposure duration and the second exposure duration provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another relationship between the first exposure duration and the second exposure duration provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another relationship between the first exposure duration and the second exposure duration provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another relationship between the first exposure duration and the second exposure duration provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a shooting mode switching operation of an electronic device provided in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a video shooting process provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of image fusion provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of modular division of a mobile phone provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The camera system may also be referred to as an imaging system.
  • FIG. 1 is a schematic diagram of a camera system provided by an embodiment of the present application.
  • the camera system may include a lens (Lens) module 11, an image sensing (Sensor) module 12, an image signal processing (Image Signal Processor, ISP) module 13 and a coding output module 14.
  • the lens module 11 is used for transforming the light beam of the imaging target to the photosensitive surface of the image sensing module 12 through light beam transformation.
  • the control parameters affecting beam transformation include lens focal length (Focal Length), aperture (Iris), depth of field (Depth of Field, DOF), exposure time, sensitivity and other parameters.
  • The image sensing module 12 is used to convert the optical signal transformed by the lens module 11 into an electrical signal through exposure, and to output the original image by reading the register of the exposed sensor. According to the control parameters of the lens module 11, including parameters such as exposure time, aperture size, and sensitivity, the quality of the output original image can be adjusted, and the original image can be output, for example an original image in Bayer (Bayer array) format.
  • the image signal processing module 13 is used to perform image processing on the original image, including but not limited to eliminating bad pixels, improving saturation, improving edge smoothness, improving photo clarity, improving preview clarity, and the like.
  • the coded output module 14 can be used to code the image output by the image signal processing module 13, output a video with a target frame rate, or output photos in other formats.
  • Reading data is the process of reading data from the register of the image sensing module after the exposure is completed.
  • the image acquisition process includes two common methods, namely overlapping (overlapped) mode and non-overlapping (non-overlapped) mode.
  • FIG. 2 is a schematic diagram of image acquisition in a non-overlapping mode provided by an embodiment of the present application. As shown in FIG. 2 , before each image acquisition period starts, the camera has completed the process of reading out the data of the previous image acquisition period.
  • the exposure of the image of the Nth frame is completed. After the exposure is completed, the image data registered in the register of the sensor is read out. Afterwards, the exposure of the N+1th frame image is started, and after the N+1th frame image exposure is completed, the N+1th frame image data registered in the register of the sensor is read out.
  • FIG. 3 is a schematic diagram of image acquisition in an overlapping mode provided by an embodiment of the present application. As shown in FIG. 3, when the camera reads out data, the readout may overlap with the exposure time of the next frame. At the same moment, the camera performs two operations, that is, the readout of the Nth frame image and the exposure of the (N+1)th frame image. Since the camera performs more operations in the same amount of time, more images can be captured in overlapping mode.
  • Motion blur, which may also be referred to as dynamic blur, refers to obvious blurred traces in a captured image caused by fast-moving objects in the image.
  • the reason for motion blur is that when a camera shoots a video, due to technical limitations, the captured image does not represent an instant image at a single moment, but a scene within a period of time.
  • the image of the scene will represent the combination of all positions of the object during the exposure time and the camera's perspective. In such images, objects that move relative to the camera will appear blurred or shaky.
  • a video shooting method provided in an embodiment of the present application may be applied to an electronic device.
  • the display screen can display a preview image in real time.
  • The display screen can still clearly display the moving picture, reducing the artifacts caused by the shaking of the electronic device or by a moving object included in the captured image.
  • FIG. 4 is a schematic diagram of a scene of a video shooting method provided by an embodiment of the present application.
  • When the electronic device is held by the user while shooting a video, the angle of view or the distance of the electronic device relative to the subject will change due to the user's own movement. Alternatively, the shaking produced when the user walks, including the bumps generated while walking or the movement caused by the instability of the user's arm, will cause a relative displacement between the electronic device and the subject during shooting.
  • an object with relative displacement may record multiple positions of the object in the same image through the image sensing module, resulting in motion blur.
  • For example, when a user uses an electronic device for live broadcasting outdoors, or makes a video call with a friend while walking, the user may hold the electronic device directly or through an electronic device holder.
  • the rapid displacement of the user will cause a relative displacement between the electronic device and the photographed object.
  • The shaking caused by the user's walking will also cause the electronic device to be displaced up and down relative to the photographed object.
  • the arm holding the electronic device may also cause the electronic device to shake or rotate relative to the subject.
  • FIG. 5 is a schematic diagram of a scene of another video shooting method provided by an embodiment of the present application.
  • the scene captured includes a background and a moving target.
  • the background is in a static state relative to the electronic device, and the moving target is in a moving state relative to the electronic device. Therefore, within the same exposure time, the position of the background in the image generated after exposure does not change.
  • The moving target may be recorded at multiple positions in the image during the exposure; thus, in the generated image, motion blur of the moving target is produced because multiple positions of the moving target are recorded within the exposure time.
  • the moving target includes a ball moving at high speed.
  • the sphere moves from position A to position B.
  • the positions of the sphere recorded through the exposure include multiple positions, that is, multiple positions during the process of the sphere moving from position A to position B.
  • the image displayed on the display screen of the electronic device may include a clear background image and a sphere that produces motion blur.
  • For example, a user is using an electronic device to shoot moving cars, pedestrians, or a sports game. Even if the electronic device has been stably fixed on the ground or on other stable equipment through a bracket, since the subjects include moving cars, pedestrians, or fast-moving athletes, a moving subject will appear at multiple positions in the same frame within the same exposure time, resulting in motion blur in the captured image.
  • Image restoration algorithms include non-blind image restoration algorithms and blind image restoration algorithms.
  • Non-blind image restoration algorithms include the inverse filter restoration algorithm, the Wiener filter restoration algorithm, the constrained least squares restoration method, the Richardson-Lucy (RL) iterative algorithm, and the like.
  • blind image restoration algorithms include cepstrum method, iterative optimization algorithm and Neural Network Restoration Algorithms, etc.
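  • These restoration algorithms are background rather than the method of the present application. As a minimal illustration of one of them, the sketch below implements the basic Richardson-Lucy iteration, assuming the blur kernel (point spread function) is known:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred: np.ndarray, psf: np.ndarray, iters: int = 30) -> np.ndarray:
    """Basic Richardson-Lucy non-blind deconvolution."""
    estimate = np.full(blurred.shape, 0.5, dtype=np.float64)  # flat initial estimate
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iters):
        denom = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (denom + 1e-12)                     # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```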
  • an embodiment of the present application provides a video shooting method, and the method of the embodiment of the present application may be applied to a scene where a shooting target includes a moving target, or a shooting electronic device and the shooting target move relatively.
  • With the video shooting method provided by the embodiment of the present application, clear images can be efficiently generated in scenes where the shooting target includes a moving target, or where the electronic device and the shooting target move relative to each other.
  • For example, when live broadcasting outdoors or while walking, or when the captured image includes a moving target, if the electronic device detects that the currently captured image has motion blur, it can change the exposure duration to one at which motion blur does not appear.
  • the first exposure duration is intercepted to obtain two or more exposure durations.
  • the intercepted images corresponding to the intercepted exposure durations can be read out respectively.
  • the read-out intercepted images are fused into one frame of images, so that the frame rate of the generated video is consistent with the frame rate of the video without motion blur.
  • Two or more exposure durations are obtained by intercepting the first exposure duration. Therefore, the intercepted exposure duration is shorter than the first exposure duration.
  • the first exposure duration is divided into two uniform exposure durations, and the divided exposure duration is only half of the first exposure duration. If the first exposure duration is divided into three uniform exposure durations, the divided exposure duration is only one-third of the first exposure duration.
  • If the first exposure duration is divided into N uniform exposure durations, each divided exposure duration is 1/N of the first exposure duration.
  • the sum of the exposure duration obtained by the above division is the same as the first exposure duration.
  • Alternatively, the sum of the divided exposure durations may be shorter than the first exposure duration. That is, when dividing the first exposure duration, part of the time may be cut out, so that the intercepted exposure durations are separated by a preset time interval.
  • the intercepted exposure durations may also be different durations.
  • the intercepted images may be divided into regions first.
  • the intercepted image may be divided into a moving area and a non-moving area, and fusion processing is performed according to different fusion methods according to different divided areas.
  • the moving area is an area including a moving target, and other areas in the image other than the moving area are non-moving areas.
  • the non-moving areas of multiple intercepted images can be fused through fusion methods such as Alpha fusion and multi-band fusion to obtain a fused image of the non-moving areas.
  • Any one of the generated intercepted images may be selected to determine the image of the moving area.
  • the determined image of the moving area is fused with the fused image of the non-moving area to obtain a frame of image. Since the non-moving regions of the multi-frame intercepted images are fused, if the non-moving regions of different frames include different image qualities, clearer images of the non-moving regions can be obtained through fusion.
  • For the motion area, the motion area of one intercepted frame is selected. Compared with an image of the motion area exposed for the first exposure duration, the exposure time is shorter, so the motion blur of the obtained motion-area image is smaller and the image is clearer.
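  • A simplified sketch of this region-wise fusion is shown below (illustrative only: the non-motion areas are fused here by plain averaging, whereas the patent also mentions alpha fusion and multi-band fusion, and the choice of which intercepted frame supplies the motion area is an assumption):

```python
import numpy as np

def fuse_intercepted_images(images, motion_mask: np.ndarray,
                            motion_source_idx: int = 0) -> np.ndarray:
    """Fuse registered intercepted frames: average the non-motion area,
    take the motion area from one designated intercepted frame."""
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    fused = stack.mean(axis=0)                      # fused (enhanced) non-motion area
    source = images[motion_source_idx].astype(np.float32)
    fused[motion_mask] = source[motion_mask]        # motion area from a single short exposure
    return np.clip(fused, 0, 255).astype(np.uint8)
```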
  • filtering processing may also be performed on the image of the motion region.
  • For example, edge-preserving filters such as guided filtering or bilateral filtering can be used to reduce the noise in the moving area and improve the image quality of the moving area.
  • Alternatively, non-local means (NLM) filtering or Gaussian filtering can be used to reduce the noise in the moving area and improve the image quality of the moving area.
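  • For instance, a bilateral filter applied only inside the motion area could look like the sketch below (assuming OpenCV; the filter parameters are illustrative, not values from the patent):

```python
import cv2
import numpy as np

def denoise_motion_region(fused: np.ndarray, motion_mask: np.ndarray) -> np.ndarray:
    """Apply an edge-preserving bilateral filter to the motion area only."""
    filtered = cv2.bilateralFilter(fused, 7, 50, 7)  # d=7, sigmaColor=50, sigmaSpace=7
    out = fused.copy()
    out[motion_mask] = filtered[motion_mask]         # non-motion area is left untouched
    return out
```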
  • the first exposure duration may be the exposure duration used by the electronic device when shooting a stable picture, that is, when the image captured by the electronic device is in a stable state.
  • FIG. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (universal serial bus, USB) interface 230, a charging management module 240, a power management module 241, and a battery 242 , sensor module 280, button 290, motor 291, indicator 292, camera 293, display screen 294, etc.
  • the sensor module 280 may include a pressure sensor, a gyroscope sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 200 .
  • the electronic device 200 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • The processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 200 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in processor 210 is a cache memory.
  • The memory may hold instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instruction or data again, it can be called directly from the memory, which avoids repeated access and reduces the waiting time of the processor 210, thereby improving the efficiency of the system.
  • processor 210 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI) and/or general-purpose input/output (general-purpose input/output, GPIO) interface, etc.
  • the interface connection relationship among the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 200 .
  • the electronic device 200 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 240 is configured to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 241 is used for connecting the battery 242 , the charging management module 240 and the processor 210 .
  • the power management module 241 receives the input of the battery 242 and/or the charging management module 240 to provide power for the processor 210 , the internal memory 221 , the external memory, the display screen 294 and the camera 293 .
  • the power management module 241 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the electronic device 200 realizes the display function through the GPU, the display screen 294 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 294 is used to display images, videos and the like.
  • Display 294 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like.
  • the electronic device 200 may include 1 or N display screens 294, where N is a positive integer greater than 1.
  • the electronic device 200 can realize the shooting function through the ISP, the camera 293 , the video codec, the GPU, the display screen 294 and the application processor.
  • the ISP is used for processing the data fed back by the camera 293 .
  • the shutter is opened, and the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 293 .
  • Camera 293 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
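  • As an illustration of the kind of conversion mentioned here (the exact matrix used by a particular device is not specified in the patent), one common full-range BT.601 YUV-to-RGB mapping is:

```python
import numpy as np

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 YUV image (U/V offset by 128) to RGB, BT.601 full range."""
    y = yuv[..., 0].astype(np.float32)
    u = yuv[..., 1].astype(np.float32) - 128.0
    v = yuv[..., 2].astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```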
  • the electronic device 200 may include 1 or N cameras 293, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 200 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • the external memory interface 220 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200.
  • the external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 221 may be used to store computer-executable program codes including instructions.
  • the processor 210 executes various functional applications and data processing of the electronic device 200 by executing instructions stored in the internal memory 221 .
  • the internal memory 221 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 200 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 221 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the keys 290 include a power key, a volume key and the like.
  • the key 290 may be a mechanical key. It can also be a touch button.
  • the electronic device 200 may receive key input and generate key signal input related to user settings and function control of the electronic device 200 .
  • the motor 291 can generate a vibrating reminder.
  • the motor 291 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 291 can also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 294 .
  • the indicator 292 can be an indicator light, which can be used to indicate the charging status, the change of the battery capacity, and also can be used to indicate messages, missed calls, notifications and so on.
  • the software system of the electronic device 200 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • The operating system of the electronic device may include, but is not limited to, Symbian, Android, iOS, BlackBerry, HarmonyOS (Hongmeng), and other operating systems; this application is not limited in this respect.
  • the embodiment of the present application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 200 .
  • FIG. 7 is a block diagram of the software structure of the electronic device 200 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into five layers, which are application program layer, application program framework layer, Android runtime (Android runtime) and system library, hardware abstraction layer and driver layer from top to bottom.
  • the application layer can consist of a series of application packages.
  • the application package may include application programs such as camera, gallery, calendar, phone, map, video, and short message.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include an interface corresponding to a camera application, a window manager, a content provider, a view system, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the Android Runtime includes core library and virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: camera service, media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the workflow of the electronic device 200 will be described as an example in combination with the video shooting method provided in this application. Taking the schematic structural diagram of the system shown in FIG. 6 as an example, the process of video shooting by the electronic device 200 is described.
  • the hardware abstraction layer includes camera components (Camera Device3), image pipeline mode components (Image Pipeline) and software application components (Image Stream Callback MGR), and the driver layer includes image sensors (sensors), image processing unit front-end nodes (Front End, ISP-FE), image processing unit back-end node (BackEnd, ISP-BE), etc.
  • the camera application in the application layer can be displayed as an icon on the screen of the electronic device.
  • the electronic device runs the camera application.
  • the camera application runs on the electronic device, and the electronic device can send corresponding touch events to the kernel layer according to user operations.
  • the kernel layer converts touch events into raw input events, and the raw input events are stored in the kernel layer. In this way, when the touch screen receives a touch event, the camera application is started, and then the camera is started by calling the kernel layer.
  • the electronic device is in the video recording mode of the camera application.
  • the image pipeline mode component Image Pipeline includes a zero-latency processor (zero shutter lag Manager, ZSL Manager), FE node (Node), BE-Streaming back-end image stream node (Node), BE-Snapshot Terminal image snapshot node (Node), Internet Protocol Suite (Internet Protocol Suite, IPS) (or understood as the pipeline filtering model in the pipeline mode), the memory carrying the platform algorithm, etc.
  • the ZSL processor is used to provide a preview image when the camera application is running, and the ZSL processor is set in the history frame holder of the raw domain image.
  • ZSL Manager can be used to manage the preview image stream obtained by ZSL, and can perform operations such as configuration, queuing, and frame selection on the preview image stream.
  • FE Node is the root node of the pipeline mode, that is, the original node of all image processing collected by the electronic device. It can serve as the front-end processor of the hardware ISP chip and can fuse the intercepted images corresponding to the two or more exposure durations obtained by dividing the first exposure duration, to obtain one fused frame of image.
  • the back-end image streaming node (BE-Streaming-Node) is used to process the preview image stream.
  • the preview image stream when the electronic device is in the recording mode and the preview image stream when the electronic device is in the recording state, etc.
  • an IPS can also be mounted, that is, the backend image stream node can access the preview image processed by the ISP. Mounting is the process by which an operating system allows an electronic device to access files on a storage device.
  • the back-end snapshot node (BE-Snapshot-Node) is used to process video-related images.
  • an IPS can also be mounted, that is, the backend snapshot node can access the snapshot image processed by the ISP.
  • IPS is a pipeline filtering model of the HAL layer. Plug-ins can be set in the IPS and used to access algorithms stored in storage devices. After the IPS accesses an algorithm, it can take over the camera preview, camera actions, and data in video recording mode, and so on.
  • the IPS can interact with the framework layer of the camera and the HAL to implement corresponding functions.
  • Algo is an image processing algorithm module that can be mounted by IPS.
  • Algo may include an algorithm processing module, and when Algo invokes the algorithm processing module, it may access processors such as CPU, GPU, and NPU when running the algorithm.
  • The image sensor (sensor) is used to acquire images, is responsible for the power-on and power-off timing of the hardware sensor, and is also used for matching control, real-time image sensor configuration, and reset functions.
  • Intercepting two or more exposure durations from the first exposure duration used by the exposure mode of the video recording mode for exposure control can be realized by configuring the image sensor.
  • the electronic device is in the recording mode of the camera application, the camera application calls the corresponding interface in the application framework layer, starts the camera driver by calling the kernel layer, turns on the camera of the electronic device, and collects images through the camera.
  • the exposure mode corresponding to the video recording mode includes: the first exposure mode, which performs exposure according to a preset first exposure duration; the second exposure mode, which intercepts multiple exposure durations from the first exposure duration and performs exposure respectively.
  • the camera of the electronic device exposes according to the exposure mode corresponding to the video recording mode, and the image sensor collects the image.
  • the images of two or more frames corresponding to the captured exposure time are saved in ZSL Manager.
  • the FE Node in the electronic device can process two or more frames of images in the ZSL Manager to generate a preview image stream, and the electronic device displays the preview image.
  • When the shooting key (or shooting control) of the electronic device receives a trigger operation, the FE Node fuses the two or more read-out intercepted images to generate one frame of image, and the video corresponding to the fused images is displayed on the display screen.
  • The electronic device in the embodiment of the present application can be a mobile phone with a camera function, an action camera (such as a GoPro), a digital camera, a tablet computer, a desktop computer, a laptop, a handheld computer, a notebook computer, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, etc.
  • the electronic device includes a video recording mode.
  • the electronic device can collect two or more frames of images according to the exposure duration intercepted by the first exposure duration, and fuse them to generate one frame of images for generating a video.
  • FIG. 8 shows the process of generating a video by an electronic device.
  • the framework layer includes camera application (APP1), camera service (camera service), display synthesis system (surface flinger); hardware abstraction layer (HAL) includes camera driver 3 (camera device3), the first exposure mode, software Application stream (APP Stream), FE node (Node), BE-Streaming backend image stream node (Node) and ZSL processor.
  • the camera application is triggered, the camera application sends a camera operation request, and the mode of the camera application is a video recording mode.
  • the camera service in the framework layer is triggered and drives the camera corresponding to the camera in the hardware abstraction layer.
  • the camera driver invokes the first exposure mode or the second exposure mode for exposure.
  • the camera driver submits a photographing request to the FE node according to the invoked first exposure mode or the second exposure mode.
  • the ZSL processor includes an image queue VCO, the first exposure mode instructs the electronic device to expose according to the first exposure time, and the image collected by the electronic device according to the first exposure time forms the VCO image queue.
  • the second exposure mode instructs the electronic device to expose according to the second exposure time length, and the images collected by the electronic device according to the second exposure time length form a VCO image queue.
  • the FE node transmits the captured photo queue to the BE-Streaming backend image stream node.
  • the ZSL processor transmits the V0 image queue to the image pipeline, and the image pipeline transmits the image queue to the framework layer through the software application stream.
  • the framework layer receives feedback data from the hardware abstraction layer, and displays the image on the display screen through the display synthesis system.
  • the electronic device is a mobile phone as an example, and a camera application is installed in the mobile phone.
  • FIG. 9 is a flow chart of the photographing method provided by the embodiment of the present application. As shown in Fig. 9, the method includes S901-S902.
  • the mobile phone runs the camera application, and the mobile phone uses the video recording mode in the camera application to collect images.
  • the embodiment of the present application is taken as an example to illustrate the video shooting method provided in the embodiment of the present application.
  • The video recording mode in the camera may correspond only to the second exposure mode for acquiring images.
  • the name of the application may be different, and the form of entering the recording function may be different.
  • the live broadcast application corresponds to the live broadcast mode
  • the instant messaging application corresponds to the video call mode.
  • the mobile phone opens the video recording mode in the camera application.
  • the corresponding application may include multiple shooting modes.
  • the image effects obtained by different camera modes are different.
  • camera applications include portrait mode, night scene mode, video recording mode, etc.
  • the facial features of the characters in the image obtained in the portrait mode are obvious, and the image obtained in the night scene mode is high-definition.
  • video recording mode video shooting can be performed, and a video file of the captured image can be generated.
  • Each mode is also used in the corresponding scene.
  • the video recording mode is used as an example to illustrate the video shooting method provided in the embodiment of the present application. It can be understood that the video shooting method is not only applicable to the video recording mode, but also can be used in a scene where video images are displayed in real time, such as for video calls, or for live broadcast, or for real-time image preview.
  • The exposure durations used by the camera to generate video images or preview images include the first exposure duration and the second exposure duration, and the second exposure duration can be two or more exposure durations intercepted from the first exposure duration.
  • the camera may determine to use the first exposure time or the second exposure time according to the current shooting state of the mobile phone for image exposure and collection.
  • the shooting state refers to the relative state between the mobile phone and the objects in the image collected by the mobile phone when the mobile phone collects images through the camera application.
  • the relative state between the mobile phone and the object in the collected images may include a state of relative movement and a state of relative stillness.
  • the current shooting state can be described as a motion state. If the mobile phone and the object in the collected image are in a relatively static state, the current shooting state can be described as a stable state.
  • the objects including mountains, trees, rocks, buildings, etc.
  • the mobile phone in a state of relative motion relative to the objects in the collected images by running or turning quickly, taking a vehicle or moving quickly by a special shooting mobile tool.
  • the shooting state of the mobile phone may be in motion.
  • a user uses a mobile phone to take pictures of ocean waves, the movement of athletes, erupting volcanoes, and flowing crowds.
  • the objects in the captured image include moving objects.
  • moving objects in the captured image including moving ocean waves, athletes, volcanoes, and crowds, are in a moving state relative to the mobile phone, and the shooting state of the mobile phone may be in a moving state.
  • Stationary users photograph stationary objects.
  • the user uses a mobile phone to shoot still images.
  • the user shoots a landscape video at a slow speed, including mountains, trees, rocks, buildings, etc.
  • the speed of the mobile phone relative to the relative movement in the collected images is relatively small.
  • the mobile phone s shooting The state may be steady state.
  • the object photographed by the fast-moving user may be in a relatively static state with the user.
  • two cars A and B are in a relatively static driving state, and the user rides in car A and takes a video of car B through a mobile phone.
  • the mobile phone is in a relatively static state relative to the object in the captured image, that is, the car B, and the shooting state of the mobile phone may be in a stable state.
  • the shooting state of the mobile phone may be a moving state.
  • the shooting state of the mobile phone is in the motion state, in the same frame of image, the shorter the exposure time, the longer the relative movement distance of the object with relative motion within the exposure time, and the image exposed within the exposure time will appear to move. The greater the chance of blurring. Therefore, when the shooting state of the mobile phone is a motion state, the second exposure time is usually selected for exposure and image acquisition.
  • the first exposure time can be selected for image collection.
  • the first exposure duration may be determined according to the frame rate of the collected video.
  • the preset video frame rate is 30 frames per second. That is, in the generated video, 30 frames of images need to be played in one second.
  • the duration of collecting one frame of image is 1/30 second, that is, 33.3 milliseconds.
  • non-overlapped exposure mode can be used for exposure and image acquisition, that is, adjacent image generation cycles do not overlap in time, that is, the first exposure duration for generating any frame image
  • the duration of the readout data and the first exposure duration of other frames and the duration of the readout data do not overlap.
  • the first exposure duration can be determined in combination with parameters of the exposure modes (non-overlap exposure mode and overlap exposure mode), including exposure interval duration and data readout duration.
  • the exposure mode set by the mobile phone is an overlapping exposure mode.
  • the exposure of the next frame image starts after the exposure of the previous frame is completed, and the duration of data readout is 3.3 milliseconds, so the first exposure duration is 30 milliseconds.
  • the second exposure duration is the exposure duration obtained by time-truncating the first exposure duration.
  • the time interception may include time interception without interval, time interception with interval, equal duration interception, non-equal duration interception, and the like.
  • the first exposure duration is equal to the sum of two or more intercepted second exposure durations.
  • the two or more second exposure durations may be equal exposure durations.
  • the brightness information of two or more intercepted images corresponding to two or more second exposure durations can be made the same, which can be more convenient
  • the intercepted image is the corresponding image obtained when exposing for the second exposure duration.
  • the first exposure duration is 30 milliseconds
  • the second exposure duration is 10 milliseconds
  • the first exposure duration is greater than the sum of two or more intercepted second exposure durations. That is, an idle period is also included between every two periods of the second exposure duration.
  • the two or more second exposure durations obtained by intercepting are exposure durations of equal duration, so that the two or more interceptions obtained corresponding to the two or more second exposure durations.
  • the brightness information of the images is the same, which can facilitate the registration and comparison between the obtained two or more intercepted images.
  • the first exposure duration is 30ms
  • the second exposure duration is 9ms.
  • An interval of 1 ms is also included between each second exposure time.
  • the first exposure duration is 30 milliseconds
  • the second exposure duration is 9 milliseconds
  • the number of second exposure durations is 3. 9 milliseconds*3 ⁇ 30 milliseconds. Between every two second exposure durations, an idle period of 1 millisecond may be included.
  • the second exposure duration intercepted by the first exposure duration may have different durations.
  • the first exposure duration is 30 milliseconds.
  • the second exposure duration is a plurality of periods of different durations, and may include three durations such as 8 milliseconds, 10 milliseconds, and 12 milliseconds. It can be understood that the intercepted second exposure duration should not be limited to the quantity of the intercepted second exposure duration and the size of the intercepted second exposure duration.
  • the intercepted second exposure durations of different durations may also include that the sum of the second exposure durations is less than the first exposure duration.
  • the image acquisition mode corresponding to the first exposure duration is the non-overlapping mode, or the The image acquisition mode corresponding to the first exposure duration is an overlapping mode, but when there is still an idle duration between two adjacent exposure durations, the sum of two or more second exposure durations generated may be greater than the second exposure duration 1. Exposure time.
  • the first exposure duration is 30 milliseconds, and the distance between two adjacent first exposure durations is greater than 10 milliseconds, and the generated three second exposure durations may be equal to 11 milliseconds.
  • each second exposure duration may be different in duration.
  • each second exposure duration may or may not include a space duration.
  • the mobile phone can receive the setting instruction from the user, and determine that the currently adopted exposure mode is the first exposure mode (the mode using the first exposure duration for exposure) or the second exposure mode (the mode using the second exposure duration exposure mode).
  • a mode selection button may be displayed on the video recording interface.
  • the mode selection button may be a button.
  • the electronic device can control switching between the first exposure mode and the second exposure mode.
  • the mobile phone may also select the first exposure mode and the second exposure mode in the video shooting interface by means of mode selection.
  • the mobile phone may determine the currently selected exposure mode (the first exposure mode or the second exposure mode) according to the detected shooting state. According to the detected shooting state, it can be determined whether the current shooting state of the mobile phone is a stable state. If it is a stable state, select the first exposure mode, and if it is an unsteady state, that is, a motion state, select the second exposure mode. Wherein, the stable state and the non-stable state may be determined according to the degree of motion blur in the captured video image. When the degree of motion blur in the captured video image is greater than a preset blur threshold, it is considered that the mobile phone is in an unstable state. If the degree of motion blur in the captured video image is less than or equal to the preset blur threshold, the mobile phone is considered to be in a stable state.
  • the parameter thresholds of the stable state and the unsteady state may be predetermined by means of statistical data, and the parameter thresholds may include motion parameter thresholds and/or image parameter thresholds.
  • the quantitative parameter thresholds for determining the shooting state of the mobile phone in different application scenarios can be obtained statistically.
  • the way of determining the shooting state can be determined according to the currently selected application scene, or can be judged directly according to the sensing data, and/or the shooting state can be judged based on the image data.
  • the way of determining the current shooting state of the mobile phone may include making a judgment based on sensory data and/or making a judgment based on image data.
  • Sensing data (or also referred to as motion data) of the mobile phone may be collected through the motion sensor, including translational velocity, translational acceleration, and/or angular displacement velocity of the mobile phone.
  • Motion data can be collected by an acceleration sensor and/or an angular velocity sensor.
  • the angular velocity sensor may include a gyroscope or the like. According to the acceleration value detected by the acceleration sensor, the moving speed of the mobile phone can be determined.
  • the shooting state of the mobile phone When determining the shooting state of the mobile phone according to the sensing data, if it is detected that the translational speed of the mobile phone is greater than the preset first speed threshold, or the translational acceleration of the mobile phone is greater than the preset first acceleration threshold, then it is determined that the mobile phone is in an unstable state. Alternatively, if it is detected that the angular displacement velocity of the mobile phone is greater than a predetermined first angular velocity threshold, it is determined that the mobile phone is in an unstable state.
  • the mobile phone when it is detected that the translation speed of the mobile phone is greater than the preset second speed threshold, and the angular velocity is greater than the predetermined second angular velocity threshold, the mobile phone is in an unstable state, or it is detected that the mobile phone translation acceleration is greater than the preset second Acceleration threshold, and the angular displacement velocity is greater than the preset second angular velocity threshold, the mobile phone is in an unstable state.
  • the first velocity threshold may be greater than the second velocity threshold
  • the first angular velocity threshold may be greater than the second angular velocity threshold.
  • the first speed threshold may be equal to the second speed threshold
  • the first angular speed threshold may be equal to the second angular speed threshold.
  • the first velocity threshold, the second velocity threshold, the first angular velocity threshold, and the second angular velocity threshold may be related to the first exposure duration, the longer the first exposure duration, the first velocity threshold, the second velocity threshold, the first angular velocity threshold , the smaller the second angular velocity threshold.
  • the first speed threshold and the second speed threshold may be greater than 0.05m/s and less than 0.3m/s, such as 0.1m/s
  • the first angular velocity threshold and the second angular velocity threshold may be greater than 0.02 ⁇ /s, and less than 0.08 ⁇ /s, for example, it can be 0.05 ⁇ /s
  • the first acceleration threshold and the second acceleration threshold can be greater than 0.05m/s 2 , and less than 0.3m/s 2 , for example, it can be 0.15m/s 2 .
  • the shooting application is scene 4
  • the sensory data and image data can be used for judgment, if it is detected that the sensory data is greater than the predetermined sensing threshold, including, for example, the translation speed of the mobile phone is greater than the predetermined first speed threshold, or the angle of the mobile phone
  • the displacement velocity is greater than the predetermined first angular velocity threshold, or the mobile phone is moving faster than the preset second velocity threshold, and the moving angular velocity is greater than the predetermined second angular velocity threshold
  • the image data judges that the mobile phone is in a stable state, it can be comprehensively determined that the mobile phone is in a stable state.
  • the relatively static state can be identified. For example, the user's riding state can be identified. When the picture taken while riding is a relatively still picture, the mobile phone can be considered to be in a stable state.
  • the shooting state of the mobile phone can be determined based on the image data.
  • the shooting status of the mobile phone can be determined by comparing two frames of images collected within a predetermined time interval.
  • the two frames of images at the predetermined time interval may be images of two adjacent frames, or may also be images collected within other set time intervals, such as images collected within a time interval of 100 milliseconds to 500 milliseconds .
  • the ratio of the changed pixels in the two frames of images to the total pixels of the image can be determined first, and then the determined ratio can be compared with a preset pixel ratio threshold. If the ratio is greater than the preset pixel ratio threshold, it is determined that the mobile phone is in an unstable state when two frames of images are captured. Otherwise, make sure the phone is in a steady state when the two frames are taken. Since the shooting state has a certain continuity, the second exposure mode can be used in the next image acquisition, that is, two or more second exposure durations can be selected to obtain corresponding two or more frames of images.
  • the total number of pixels of the two adjacent frames of images selected for comparison is N1
  • the number of changed pixels is N2
  • the preset pixel ratio threshold is Y1 if N2/N1 is greater than or equal to Y1 , it is determined that the mobile phone is in an unstable state. If N2/N1 is smaller than Y1, it is determined that the mobile phone is in a stable state.
  • the pixel ratio threshold may be any value in 8%-12%, for example, it may be 10%.
  • the changed pixel in the two frames of images it may be based on the similarity of the pixel and/or the grayscale change of the pixel.
  • the calculated distance may be compared with a preset distance threshold, and if the calculated distance is greater than the preset distance threshold, it is determined that two pixels have changed. If it is less than or equal to a preset distance threshold, it is determined that there is no change in the two pixels.
  • the three-dimensional vector corresponding to pixel 1 is (R1, B1, G1)
  • the three-dimensional vector corresponding to pixel 2 is (R2, G2, B2)
  • the distance between the three-dimensional vectors corresponding to pixel 1 and pixel 2 can be expressed as: (R1 The square root of the value of -R2) ⁇ 2+(G1-G2) ⁇ 2+(B1-B2) ⁇ 2, that is, the distance between pixel 1 and pixel 2 in the color space.
  • the three-dimensional vector corresponding to pixel 1 is (R1, B1, G1)
  • the three-dimensional vector corresponding to pixel 2 is (R2, G2, B2)
  • l1 sqrt(r1*r1+g1*g1+b1*b1)
  • l2 sqrt(r2*r2+g2*g2+b2*b2)
  • the angle between the three-dimensional vectors corresponding to pixel 1 and pixel 2 can be calculated.
  • RGB value of a pixel into an HSI (Hue, Saturation, Grayscale, and Intensity) value, according to the difference in the converted hue, the difference in brightness, the difference in grayscale and color saturation to determine whether two pixels have changed.
  • HSI Human, Saturation, Grayscale, and Intensity
  • corresponding thresholds can be set to determine whether the two pixels that need to be compared have changed. When any of them is greater than the preset corresponding threshold , then determine that the pixel has changed.
  • the shooting state of the mobile phone may be determined according to the acquired edge information of two frames of images shot within a predetermined time interval.
  • the predetermined duration may be a time interval between acquiring two adjacent frames of images, or a time interval between acquiring two adjacent M frames of images, and M may be a natural number less than 10.
  • edge detection is performed on the captured images to obtain edge information included in the two frames of images. If the mobile phone and the subject are relatively still, the sharpness of the edge area of the two captured images will not change significantly. Therefore, whether the edge of the image changes can be determined according to the sharpness change threshold of the edge information of the two frames of images. Counting the area that changes in the image, or the ratio of the area with reduced sharpness in the image to the edge area of the image, and if the ratio is greater than or equal to a predetermined edge ratio threshold, it is determined that the mobile phone is in an unstable shooting state. Otherwise, it is in a stable shooting state.
  • the edge ratio threshold may be any value in 15%-25%, for example, it may be 20%.
  • the mobile phone when the mobile phone turns on the video recording mode, it can receive a user's setting instruction to determine whether to use the second exposure mode for exposure, or can determine whether to use the second exposure mode for exposure according to the shooting status of the mobile phone. In a possible implementation manner, it may also be determined whether to use the second exposure mode for exposure according to the type of application that starts the shooting, or according to the user's shooting habits.
  • the mobile phone displays a preview image, or receives a recording instruction, and the mobile phone displays a preview image and generates a video file.
  • the mobile phone After the mobile phone turns on the video recording mode in the camera application, it is determined that the mobile phone is currently exposing and reading image data in the first exposure mode or the second exposure mode according to the set exposure mode determination method.
  • the mobile phone after determining whether the mobile phone is currently in a stable state or an unstable state based on image data and sensor data, according to the corresponding relationship between the shooting state and the exposure mode, if the current shooting state If it is in a stable state, select the first exposure mode for exposure, for example, to expose a frame according to an exposure time of 30 milliseconds, that is, to generate an image corresponding to one frame of video. If the current demolition state is an unstable state, exposure may be performed according to the second exposure mode, for example, three exposure durations of 10 milliseconds are used for exposure to obtain three frames of images. The obtained three frames of images are fused to obtain one frame of video, or further through ISP (image signal processing), the preview image is displayed on the screen, or a video file is generated.
  • ISP image signal processing
  • the intercepted image generated by the second exposure mode includes less motion blur information, and therefore has Clearer display effect.
  • the motion area and the non-motion area in the intercepted image may be fused in different fusion methods.
  • the motion area may be understood as an area where motion blur occurs in the image, and the non-motion area may be understood as an area where motion blur does not occur in the image.
  • fusion of the non-moving regions of two or more frames of intercepted images may adopt fusion methods such as Alpha fusion and multi-band fusion to fuse the non-moving regions of the intercepted images after registration transformation into one frame of images.
  • the transparency corresponding to each intercepted image can be set in advance, or the transparency corresponding to each intercepted image can be determined according to the second exposure time, and each intercepted image is multiplied by the corresponding transparency and summed to obtain the fused An image of the non-moving area of the cropped image.
  • the intercepted image generated in the second exposure mode includes three frames, namely P1, P2 and P3.
  • the transparency corresponding to the three frames of images is a1, a2, and a3, and it can be determined that the image of the fused non-moving area is: P1*a1+P2*a2+P3*a3.
  • the transparency corresponding to the intercepted image can be determined according to the exposure duration of the intercepted image.
  • any frame can be selected as the image of the motion area and fused with the non-motion area to obtain an image in a frame of video.
  • the motion area of the intermediate intercepted image may be selected, and fused with the fused non-motion area to obtain an image in one frame of video.
  • the motion area in the second frame image can be selected, and the fused non-motion area Perform fusion. If the generated intercepted image is 4 frames, the motion area in the second frame or the third frame image can be selected to be fused with the fused non-motion area.
  • the motion area can be further optimized, which can include filtering the motion area of the frame, including guided filtering or bidirectional filtering such as edge-preserving filtering to reduce the motion area. noise and improve image quality in motion areas.
  • non-local mean filtering non-local means, NLM
  • Gaussian filtering can be used to reduce the noise in the moving area and improve the image quality of the moving area.
  • the motion area and the non-motion area included in the intercepted image may be determined by means of image comparison. As shown in FIG. 16 , in order to improve the accuracy of the determined moving area and non-moving area in the intercepted image, the images may be matched and transformed before image comparison.
  • the generated intercepted images are 3 frames, namely the N-1th frame, the Nth frame and the N+1th frame, and the Nth frame can be used as the reference frame, and the N+1th frame and the Nth frame
  • the N-1 frame performs image registration with the reference frame, so as to determine the transformation matrix of the N+1 frame when the N+1 frame is registered with the N frame, and when determining the N-1 frame and the N frame registration, The transformation matrix for frame N-1.
  • image registration methods can include such as mean absolute difference algorithm, absolute error sum algorithm, error sum of squares algorithm, average error sum of squares algorithm, normalized product correlation algorithm, sequential similarity detection algorithm, local gray value coding algorithm Wait.
  • the N-1th frame and the N+th frame can be respectively calculated according to the determined transformation matrix Registration transformation is performed for one frame, so that the generated intercepted image is an image after registration transformation, and pixel comparison can be performed more accurately, thereby more effectively determining the moving area and the non-moving area in the image.
  • the transformed N+1th frame image (N'+1), the transformed N-1th frame image Frame image (N'-1), the reference image is grayscaled, and then the pixels of the N+1th frame image and the N-1th frame image after the grayscale processing are respectively grayscaled with the pixels of the reference image Value comparison, if the grayscale value is greater than a predetermined grayscale threshold, such as greater than a preset grayscale threshold, for example, the grayscale threshold can be any value in 30-70, then it is determined that the pixel is in the compared two frame images , it is determined to belong to the motion area. Otherwise it belongs to the non-moving area.
  • a predetermined grayscale threshold such as greater than a preset grayscale threshold, for example, the grayscale threshold can be any value in 30-70
  • the angle between the three-dimensional vectors corresponding to the two pixels can be used to determine whether the compared pixel belongs to motion in the compared images of the two frames area or a non-sports area. According to the comparison result of the pixels of the transformed image and the pixels of the reference image, the moving area and the non-moving area determined by the transformed image and the reference image can be obtained.
  • the exposure time is different, when performing non-moving region fusion, more abundant scene information can be fused to obtain a better scene image of the non-moving region.
  • the moving area in the intercepted image with the shortest exposure time can be fused with the fused non-moving area, so that the fused image has more for a clear zone of motion.
  • FIG. 16 uses three frames of intercepted images for illustration.
  • the generated intercepted images are two frames, which are the Nth frame and the N+1th frame respectively.
  • Any frame can be determined, such as the Nth frame as the reference image.
  • Register the N+1th frame with the reference image determine the transformation matrix between the N+1th frame and the reference image, transform the N+1th frame according to the transformation matrix, and obtain the transformed N+1th frame image(N'+1).
  • the moving area is determined according to the positions of the changed pixels, and the non-moving area is determined according to the positions of the unchanged pixels.
  • the non-moving area of the Nth frame image and the N'+1 frame image is fused through Alpha fusion or multi-band fusion to obtain the non-moving area after fusion.
  • the motion area of the Nth frame image, or the image of the motion area of the N'+1 frame is filtered, and then fused with the fused non-motion area to obtain a frame of video image.
  • the mobile phone can be modularized so that the mobile phone can obtain a deblurred video.
  • the mobile phone may include a collection module 1701 and a deblurring module 1702 .
  • the acquisition module 1701 is used to first determine the first exposure duration under the current shooting parameters. Then perform time segment interception with the first exposure duration to obtain two or more second exposure durations, and perform exposure and read data with two or more second exposure durations to obtain two or more second exposure durations Capture image.
  • the deblurring module 1702 is used to perform fusion processing according to the collected two or more intercepted images to obtain a fused image.
  • a frame of image after decoration and fusion is a frame of image for video, which is used for preview display or generation of video file. Since the two or more intercepted images captured by the first exposure time are fused into one frame image, compared with the one frame image obtained by the first exposure time length, the number of images obtained by the two in the same time length is also the same , therefore, the fused image obtained by intercepting the exposure time can meet the video frame rate requirement of the mobile phone.
  • the modules divided into the mobile phone may also include a switching module 1703 .
  • the switching module 1703 can detect the shooting status of the mobile phone in real time. If it is detected that the shooting state of the mobile phone is in a stable state, the first exposure time is used for exposure, and a frame of image is directly obtained by reading out the data for preview display or for generating an image in a video file. If it is detected that the shooting state of the mobile phone is in an unstable state, the first exposure duration is intercepted for a time period to obtain two or more second exposure durations, and the exposure is performed according to the two or more exposure durations, read Output the data to get two or more intercepted images.
  • the detection of the shooting state may be determined according to the sensing data of the mobile phone.
  • the sensor data of the mobile phone can be read according to the acceleration sensor and/or the angular velocity sensor of the mobile phone, and the translation speed of the mobile phone can be determined according to the acceleration data.
  • the translation speed of the mobile phone can be compared with a preset speed threshold, or the angular velocity of the mobile phone can be compared with a preset angular velocity threshold to determine whether the mobile phone is in a stable state.
  • the shooting status can also be detected based on the images captured by the mobile phone. For example, it is possible to select pixel changes of two frames of images at a predetermined time interval to determine whether it is in a stable state.
  • whether it is in a stable state can be determined according to the ratio of changed pixels to total pixels in two frames of images.
  • the determination of the changed pixel it may be determined whether the compared pixel is a changed pixel by calculating the similarity of the pixels, the difference of the pixels, and the like.
  • whether the mobile phone is in a stable state can also be determined according to the ratio of the edge area where the sharpness changes in the compared two frames of images to the total edge area.
  • the intercepted images to be fused when performing fusion processing on the generated intercepted images of two or more frames, can be divided into regions, for example, they can be divided into motion regions and non-motion regions, and non-motion regions can be divided into regions.
  • the multi-frame images of the region are superimposed for fusion.
  • the motion area the motion area of one frame of image can be selected and fused with the fused non-motion area to obtain a frame of video image.
  • the electronic device provided in the embodiment of the present application includes corresponding hardware structures and/or software modules for performing various functions.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software in combination with the example units and algorithm steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware or computer software drives hardware depends on the specific application and design constraints of the technical solution. Professionals and technicians may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the embodiments of the present application.
  • the embodiments of the present application may divide the above-mentioned electronic device into functional modules according to the above-mentioned method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 18 shows a possible structural diagram of the electronic device involved in the above embodiment.
  • the electronic device 200 includes: a processing unit 1801 , a display unit 1802 and a storage unit 1803 .
  • the processing unit 1801 is configured to manage the actions of the electronic device. For example, the processing unit 1801 may control the exposure mode of the electronic device in video recording mode, and the processing unit 1801 may also control the display content of the display screen of the electronic device and the like.
  • the display unit 1802 is configured to display the interface of the electronic device.
  • the display unit 1802 may be used to display the main interface of the electronic device in the video recording mode, and the display unit 1802 is used to display the preview image in the video recording mode and the like.
  • the storage unit 1803 is used to store program codes and data of the electronic device 200 .
  • the storage unit 1803 can cache the preview image of the electronic device, and the storage unit 1803 is also used to store image processing algorithms in the video recording mode.
  • the unit modules in the above-mentioned electronic device 200 include but are not limited to the above-mentioned processing unit 1801 , display unit 1802 and storage unit 1803 .
  • the electronic device 200 may further include a sensor unit, a communication unit, and the like.
  • the sensor unit may include a light sensor to collect light intensity in the environment where the electronic device is located.
  • the communication unit is used to support communication between the electronic device 200 and other devices.
  • the processing unit 1801 may be a processor or a controller, such as a central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (applicationspecific integrated circuit, ASIC), Field programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof.
  • processors may include application processors and baseband processors. It can implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, a combination of one or more microprocessors, a combination of DSP and a microprocessor, and so on.
  • the storage unit 1803 may be a memory.
  • the audio unit may include a microphone, a speaker, a receiver, and the like.
  • the communication unit may be a transceiver, a transceiver circuit, or a communication interface.
  • the processing unit 1801 is a processor (such as the processor 210 shown in FIG. 6 ), and the display unit 1802 can be a display screen (such as a display screen 294 shown in FIG. 6 , and the display screen 294 can be a touch screen, which can be integrated display panel and touch panel), the storage unit 1803 may be a memory (internal memory 221 shown in FIG. 6 ).
  • An embodiment of the present application further provides a chip system, where the chip system includes at least one processor and at least one interface circuit.
  • the processor and interface circuitry may be interconnected by wires.
  • interface circuits may be used to receive signals from other devices, such as the memory of an electronic device.
  • an interface circuit may be used to send signals to other devices, such as a processor.
  • the interface circuit can read instructions stored in the memory and send the instructions to the processor.
  • the electronic device can be made to execute various steps in the above-mentioned embodiments.
  • the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • the embodiment of the present application also provides a computer storage medium, the computer storage medium includes computer instructions, and when the computer instructions are run on the above-mentioned electronic device, the electronic device is made to perform the various functions or steps performed by the mobile phone in the above-mentioned method embodiment .
  • the embodiment of the present application also provides a computer program product, which, when the computer program product is run on a computer, causes the computer to execute each function or step performed by the mobile phone in the method embodiment above.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components can be Incorporation or may be integrated into another device, or some features may be omitted, or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component displayed as a unit may be one physical unit or multiple physical units, that is, it may be located in one place, or may be distributed to multiple different places . Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solution of the embodiment of the present application is essentially or the part that contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, and the software product is stored in a storage medium Among them, several instructions are included to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor (processor) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: various media that can store program codes such as U disk, mobile hard disk, read only memory (ROM), random access memory (random access memory, RAM), magnetic disk or optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

本申请属于图像处理领域,提出了一种视频拍摄方法、电子设备及计算机可读存储介质。该方法包括:电子设备确定当前的拍摄状态为运动状态,获取当前拍摄参数中的第一曝光时长;所述电子设备对所述第一曝光时长进行时间截取获得截取图像,所述截取图像包括两帧或两帧以上的图像,且所述截取图像与时间截取得到的曝光时长对应;所述电子设备将所述截取图像融合为一帧图像,根据融合后的图像生成视频。在电子设备处于运动状态的拍摄时,通过对第一曝光时长进行时间截取生成截取图像,使得截取图像的曝光时长小于第一曝光时长,对截取图像融合后,可以得到运动区域更为清晰的图像,从而能够降低视频动态模糊度,减少图像伪影,提高视频质量。

Description

视频拍摄方法、电子设备及计算机可读存储介质
本申请要求于2021年06月25日在中国专利局提交的、申请号为202110714588.7、申请名称为“视频拍摄方法、电子设备及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于图像处理领域,尤其涉及视频拍摄方法、电子设备及计算机可读存储介质。
背景技术
随着科技的发展,越来越多的电子设备配置有摄像头。通过运行相机应用程序,可以控制摄像头的拍摄参数,包括如感光度、快门速度、光圈大小等,可以实现不同画质的照片或视频的拍摄。其中,感光度越高、快门速度越慢或光圈越大,则照片越亮,感光度越低、快门速度越快或光圈越小,则照片越暗。
电子设备在拍摄视频时,所拍摄的画面中可能存在运动的物体,或者拍摄者持有的电子设备可能会发生抖动,使得电子设备与拍摄对象发生相对位移。由于相对位移的出现,在生成一帧画面的曝光时长范围内,拍摄对象在图像传感器上的曝光位置可能发生移动,从而导致所拍摄的视频存在伪影,或者所拍摄的视频存在动态模糊,影响视频的拍摄质量。
发明内容
本申请提供一种视频拍摄方法、电子设备及计算机可读存储介质,可以在电子设备拍摄视频的过程中,减少视频存在的伪影,或减少所拍摄视频的动态模糊,提高视频的拍摄质量。
为实现上述技术目的,本申请采用如下技术方案:
第一方面,本申请提供一种视频拍摄方法,该方法应用于电子设备,该方法可以包括:所述电子设备确定当前的拍摄状态为运动状态,获取当前拍摄参数中的第一曝光时长;所述电子设备对所述第一曝光时长进行时间截取获得截取图像,所述截取图像包括两帧或两帧以上的图像,且所述截取图像与时间截取得到的曝光时长对应;所述电子设备将所述截取图像融合为一帧图像,根据融合后的图像生成视频。
其中,电子设备的拍摄状态可以包括运动状态和非运动状态(即稳定状态)。在电子设备确定当前拍摄状态处于运动状态,则对所获取的第一曝光时长进行时间截取,将第一曝光时长截取成为两个或两个以上的第二曝光时长,由所截取的两个或两个以上的第二曝光时长,分别进行曝光和图像采集,得到两个或两个以上的截取图像。将所截取的两个或两个以上的截取图像融合为视频的一帧图像,以生成与第一曝光时长的帧率一致的视频。通过对第一曝光时长进行时间截取,使得截取得到的第二曝光时长小于第一曝光时长。根据第二曝光时长进行曝光和图像采集时,由于曝光时长更短,因此,在第二曝光时长内发生相对移动的距离,相对于第一曝光时长内发生相对移动的距离更小,因此,在截取图像 中产生动态模糊和伪影的程度更小,从而能够有效的提高融合后的图像的质量,减少图像的伪影和动态模糊。
其中,所述拍摄状态,可以理解为电子设备与拍摄的图像中的物体的相对状态。可以根据所采集的图像的像素的变化,或者通过电子设备的传感器采集的数据,来确定电子设备的拍摄状态。
当确定电子设备的拍摄状态处于稳定状态时,电子设备可以直接通过第一曝光进长进行曝光和图像采集,将所采集的图像直接作为视频的一帧图像。即相对于运动状态,电子设备处于稳定状态时,不需要对第一曝光时长进行时间截取以及对截取图像进行图像融合。
通过对电子设备的拍摄状态进行检测,确定电子设备处于稳定状态时,可以不需要对第一曝光时长进行时间截取,避免在第一曝光时长内多次曝光、数据读出和图像融合,从而有利于提高视频图像的获取效率。
在电子设备确定当前的拍摄状态的实现方式中,可以通过运动传感器采集电子设备的传感数据,根据传感数据确定电子设备当前的拍摄状态。
其中,传感数据可以包括平移加速度和角位移加速度。可以通过加速度传感器或陀螺仪等设备,确定所述电子设备的平移加速度和角位移加速度。通过平移加速度可以计算电子设备的平移速度,通过角位移加速度,可以计算电子设备的角位移速度。可以根据平移速度、平移加速度、角位移速度、角位移加速度中的一种参数,或者几种参数相结合的方式,更为可靠的确定电子设备的拍摄状态。
在电子设备确定当前的拍摄状态的实现方式中,也可以根据电子设备所拍摄的图像中的像素点的变化,来确定电子设备当前的拍摄状态。
比如,电子设备按照预定的时长间隔采集两帧图像,比较两帧图像中的像素点,确定发生变化的像素点的数量。根据发生变化的像素点的数量与图像总的像素点的比值来反应图像的变化程度。当该比值大于预先设定的比例阈值,则表示图像内容变化剧烈,电子设备当前的拍摄状态处于运动状态,反之则处于稳定状态。
其中,比例阈值的大小,可与所比较的两帧图像的时间间隔关联。随着该时间间隔的增加,可相应的提高比例阈值的大小。当两帧图像为视频中的相邻图像时,可以根据视频的帧率确定该比例阈值。
通过图像比较的方式确定电子设备的拍摄状态,可以不必读取加速度传感器(平移加速度传感器或角位移加速度传感器)数据,使得不具有加速度传感器的电子设备能够有效的确定电子设备当前的拍摄状态,提升视频拍摄方法可应用的设备范围。
在进行图像比较时,可以对两帧图像的像素逐个进行比较。由于电子设备在两帧图像时,电子设备本身可能有发生平移或角位移,因此,为了提高像素比较的准确度,可以先将需要比较的图像进行配准,通过配准后的图像进行相似度比较,提高像素比较的精度。
在进行像素比对时,可以根据像素点的颜色的相似度,来确定两个像素点是否相似。或者也可以根据像素点对应的灰度值、色调、色饱和度、亮度的差值,来确定两个像素点是否相似。
通过像素的颜色进行相似度比较时,可以确定需要判断的两个像素点的RGB值对应的三维向量,然后计算两个三维向量的距离,即颜色空间的距离的方式,来确定两个像素是否相似。
在本申请实施例中,在确定电子设备的拍摄状态的实现方式中,还可以包括根据电子设备所采集的图像的锐度信息的变化的方式。通过图像的锐度信息来确定电子设备的拍摄状态时,电子设备可以获取在预定时长间隔内所拍摄的两帧图像;对所述两帧图像进行边缘检测,确定两帧图像的边缘的锐度发生变化的区域;如果锐度发生变化的区域与边缘区域的比值大于或等于预定的边缘比例阈值,所述电子设备的拍摄状态为运动定状态;如果锐度发生变化的区域与边缘区域的比值小于预定的边缘比例阈值,所述电子设备的拍摄状态为稳定状态。
由于电子设备在拍摄相对运动的物体时,整体画面发生偏移,导致画面中的物体的轮廓发生模糊,从而使得物体的轮廓的锐度降低。通过锐度信息的比较,可以检测到电子设备当前处于非稳定状态(或者稳定状态与运动状态结合的状态),可以根据该检测结果,预测未来一段时间内的拍摄状态的持续趋势,根据该持续趋势确定电子设备的曝光时长,以生成图像质量更佳的视频。
截取第一曝光时长生成两个或两个以上的第二曝光时长,根据第二曝光时长可以生成两个或两个以上的截取图像。为了提高视频的图像质量,以及保持视频帧率的稳定性,需要将所生成的两个或两个以上的截取图像进行融合处理。
在可能的图像融合实现方式中,电子设备可以确定所述截取图像中的运动区域和非运动区域;所述电子设备将所述截取图像中的非运动区域的图像进行融合,结合预先确定的所述截取图像中的指定图像的运动区域的图像,生成一帧图像。
将非运动区域进行图像融合时,可以根据两个图像的非运动区域的数据进行画质增强处理,得到融合后的非运动区域。对于运动区域,直接采用预先确定的截取图像中的指定图像的运动区域,组合生成一帧图像。由于非运动区域进行画质增强,且由于第二曝光时长更短,截取图像的运动区域中的图像相对于第一曝光时长的图像更为清晰,因此,融合后的图像能够降低动态模糊,减少图像伪影。
在确定运动区域和非运动区域的实现方式中,电子设备可以将所述截取图像进行配准变换,得到基准图像和变换图像;所述电子设备计算所述变换图像和所述基准图像之间的像素差;当所述像素差大于或等于预定的像素差阈值时,所述电子设备确定该像素差对应的像素点属于运动区域;当所述像素差小于预定的像素差阈值时,所述电子设备确定该像素差对应的像素点属于非运动区域。
通过将截取图像进行配准后,对截取图像的像素逐个进行比较,确定每个像素所属的区域。在具体实施过程中,还可以对确定的运动区域进行过滤和筛选,以提升优化效率。
在进行图像配准变换时,可以将截取图像中的其中一个图像确定为基准图像,将所述截取图像中的其它图像与所述基准图像进行配准处理,确定其它图像与所述基准图像之间的变换矩阵;根据所确定的变换矩阵对所述其它图像进行图像变换,得到变换图像。
其中,基准图像可以为截取图像中处于中间位置的图像。比如,截取图像可以包括3个,可以将第2个截取图像作为基准图像,将第1个截取图像和第3个截取图像作为其它图像。根据第1个截取图像与第2个截取图像的变换矩阵,对第1个截取图像进行配准变换,以及根据第3个截取图像与第2个图像的变换矩阵,对第3个截取图像进行配准变换。
当然,不局限于此,也可以将其中任意截取图像作为基准图像,可以由变换后的图像进行比较,也可以由变换后的图像与基准图像进行比较。
在可能的实现方式中,还可以对生成的一帧图像中的运动区域进行滤波处理,得到滤波后的图像。从而能够更进一步提升所生成的一帧图像的运动区域的图像质量。
第二方面,本申请实施例提出了一种电子设备,该电子设备包括摄像头,用于采集图像;显示屏,用于显示所采集的图像;一个或多个处理器;存储器;以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如第一方面所述的视频拍摄方法。
第三方面,本申请实施例提出了一种计算机可读存储介质,所述计算机可读存储介质包括计算机指令,当所述计算机指令在计算机上运行时,使得所述计算机执行如第一方面所述的视频拍摄方法。
附图说明
图1为本申请实施例提供的一种相机系统示意图;
图2为本申请实施例提供的一种非重叠模式的图像采集示意图;
图3为本申请实施例提供的一种重叠模式的图像采集示意图;
图4为本申请实施例提供的一种视频拍摄方法的实施场景示意图;
图5为本申请实施例提供的又一视频拍摄方法的实施场景示意图;
图6为本申请实施例提供的一种电子设备的结构示意图;
图7为本申请实施例提供的一种软件结构示意图;
图8为本申请实施例提供的一种电子设备的拍照流程在硬件抽象层和框架层之间的运行过程示意图;
图9为本申请实施例提供的一种视频拍摄方法的实现流程示意图;
图10为本申请实施例提供的一种第一曝光时长与第一曝光时长的关系示意图;
图11为本申请实施例提供的又一第一曝光时长与第一曝光时长的关系示意图;
图12为本申请实施例提供的又一第一曝光时长与第一曝光时长的关系示意图;
图13为本申请实施例提供的又一第一曝光时长与第一曝光时长的关系示意图;
图14为本申请实施例提供的一种电子设备的拍摄模式切换操作示意图;
图15为本申请实施例提供的一种视频拍摄流程示意图;
图16为本申请实施例提供的一种图像融合示意图;
图17为本申请实施例提供的一种手机的模块化划分示意图;
图18为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面示例性介绍本申请实施例可能涉及的相关内容。
(1)相机系统,或者也称为摄像系统。
参见图1,图1为本申请实施例提供的相机系统示意图。如图1所示,相机系统可以包括镜头(Lens)模块11、图像传感(Sensor)模块12、图像信号处理(Image Signal Processor,ISP)模块13和编码输出模块14。
其中,镜头模块11用于通过光束变换,将成像目标的光束变换至图像传感模块12的光敏面。影响光束变换的控制参数包括镜头焦距(Focal Length)、光圈(Iris)、景深(Depth of Field,DOF)、曝光时长、感光度等参数。通过调节和控制上述参数,可以实现对图像 传感模块12所采集的图像的变化,包括如视角、进光量等信息的变化。
所述图像传感模块12用于将镜头模块11所变换的光信号,通过曝光,将光信号转换为电信号,通过读取曝光后的传感器的寄存器,可输出原始图像。可以根据镜头模块11的控制参数,包括如曝光时长、光圈大小、感光度等参数,调整所输出的原始图像画面质量,可以输出原始图像,比如可以为Bayer(拜耳阵列)格式的原始图像至图像信号处理模块13。
所述图像信号处理模块13用于对原始图像进行图像处理,包括但不限于坏点消除、提升饱和度、提升边缘平滑度、提升照片清晰度、改善预览清晰度等。
所述编码输出模块14可用于将图像信号处理模块13所输出的图像进行编码处理,输出目标帧率的视频,或者输出其它格式要求的照片等。
(2)曝光(Exposure)和读出数据(Sensor Readout)。
相机在图像采集过程中,也即图像传输至ISP处理前,包括两个部分,分别为曝光(Exposure)和读出数据(Sensor Readout)。读出数据是在曝光完成后,从图像传感模块的寄存器读出数据的过程。
图像采集过程包括两种常见的方法,即重叠(overlapped)模式和非重叠(non-overlapped)模式。
图2为本申请实施例提供的非重叠模式的图像采集示意图。如图2所示,相机在每个图像采集周期开始之前,已经完成上一个图像采集周期的读出数据的过程。
比如,在图2所示的图像采集过程中,根据预先设定的曝光时长,完成第N帧图像的曝光,在曝光完成后,读出传感器的寄存器所寄存的图像数据,在读出数据完成后,开始进行第N+1帧图像的曝光,以及在第N+1帧图像曝光完成后,读出传感器的寄存器所寄存的第N+1帧图像数据。
图3为本申请实施例提供的重叠模式的图像采集示意图。如图3所示,相机在读出数据时,可以与下一帧图像的曝光时间出现重叠。在同一个时刻内,相机执行两个操作,即第N帧图像的读出数据以及第N+1帧图像的曝光。由于相机在同样时长内执行的操作更多,因此,在重叠模式下,可以采集更多的图像。
(3)动态模糊
动态模糊也可以称为运动模糊,是指所拍摄的图像中,由于图像中包括快速移动的物体而造成的明显的模糊的痕迹。
产生动态模糊的原因在于,当相机拍摄视频时,由于技术的限制,所拍摄的图像所表现的不是单一时刻的即时影像,而是一段时间内的场景。该场景的影像会表现物体在曝光时间内所有位置的组合以及相机的视角。在这样的图像中,相对相机有相对运动物体将会看起来模糊或被晃动。
本申请实施例提供的一种视频拍摄方法,该方法可以应用于电子设备。电子设备在拍摄视频的过程中,显示屏可以实时显示预览图像。当拍摄者手持的电子设备出现晃动,或者拍摄的图像中包括运动的物体,显示屏仍然可以清晰的显示运动图像,减小由于电子设备的晃动或拍摄的图像中包括的运动物体所产生的伪影图像。
图4为本申请实施例提供的视频拍摄方法的场景示意图。如图4所示,电子设备在被用户持有并拍摄视频的过程中,由于用户本身的移动,会使得电子设备相对于拍摄物体的 视角或距离发生变化。或者,由于用户行走时所产生的晃动,包括用户行走时所产生的颠簸,或者用户手臂不稳定所产生的移动,会使得电子设备在拍摄过程中,会使得电子设备相对于拍摄物体产生相对位移。
由于电子设备相对于拍摄物体产生相对位移,或者电子设备相对于拍摄物体的视角或距离发生变化。在同一曝光时长内,发生相对位移的物体,可能会通过图像传感模块记录物体在同一图像中存在的多个位置,从而产生动态模糊。
比如图4所示的场景示意图中,由于电子设备相对于拍摄对象整体产生相对位移,因此,在所拍摄的图像中,会产生整体画面的动态模糊。在移动拍摄过程中,电子设备的预览画面中,可能会显示动态模糊的预览画面。电子设备在接收视频拍摄指令后,如果拍摄过程中,电子设备与拍摄对象出现相对位移,在拍摄的同一帧图像中,可能会记录发生相对位移的物体在多个位置的影像,从而使得图像出现动态模糊。
例如,用户在使用电子设备进行户外直播,或者用户在行走过程中与好朋友进行视频通话时,用户通过电子设备支架持有电子设备。在用户行走过程中,用户的快速位移会使得电子设备与拍摄对象之间发生相对位移。在用户行走过程中,用户行走所带来的晃动,也会使得电子设备相对于拍摄对象出现上下晃动的相对位移。在用户行走过程中,持有电子设备的手臂,可能也会使得电子设备相对于拍摄对象出现晃动或旋转。
图5为本申请实施例提供的又一视频拍摄方法的场景示意图。如图5所示,电子设备在使用过程中,所拍摄的场景包括背景和运动目标。在拍摄过程中,背景相对于电子设备处于静止状态,运动目标相对于电子设备处于运动状态。因此,在同一曝光时长内,背景在曝光后生成的图像中的位置没有发生改变,运动目标在曝光完成后,可能会在曝光过程中记载运动目标在图像中的多个位置,从而在生成的图像中,由于记录运动目标在曝光时长内的多个位置而产生运动目标的动态模糊。
比如图5所示的场景示意图中,运动目标包括高速运动的球体。在某一曝光时长t内,球体从位置A运动至位置B。在这一曝光时长内,通过曝光所记录的球体的位置包括多个,即球体从位置A移动到位置B的过程中的多个位置。在电子设备的显示屏上所显示的图像中,可能会包括清晰的背景图像,以及产生动态模糊的球体。或者,在电子设备所拍摄的视频画面中,可能会包括清晰的背景图像,以及产生动态模糊的球体。
例如,用户在使用电子设备拍摄运动的汽车、行人或拍摄比赛等内容。即使电子设备已经通过支架稳定的固定在地面或其它稳定的设备上,由于拍摄对象中包括运动的汽车、行人或快速移动的运动员,在同一曝光时长内,移动的拍摄目标会出现在同一帧图像中的多个位置,从而使得拍摄的图像出现动态模糊。
目前对于运动模糊的消除,通常根据算法进行图像复原。图像复原算法包括非盲图复原算法和盲图复原算法。其中,非盲图复原算法包括逆滤波复原算法、维纳滤波复原算法、约束最小二乘方复原法和RL(强化学习)迭代算法等,盲图复原算法包括倒频谱法、迭代优化求解算法和神经网络复原算法等。通过算法对图像进行复原时,计算较为复杂,不利于在视频拍摄时实时获得清晰的预览视频,或高效的生成所拍摄的视频。
基于此,本申请实施例提供了一种视频拍摄方法,本申请实施例的方法可以应用于拍摄目标包括运动目标,或者拍摄的电子设备与拍摄目标发生相对移动的场景中。通过本申请实施例所提供的视频拍摄方法,可以在拍摄目标包括运动目标,或者拍摄的电子设备与 拍摄目标发生相对移动的场景中,高效的生成清晰的图像。
例如,在户外直播或在行走过程中,或者拍摄的图像中包括运动目标,电子设备在检测到当前拍摄的图像出现动态模糊时,可以通过改变曝光时长的方法,将未出现动态模糊时所使用的第一曝光时长进行截取,获得两个或两个以上的曝光时长。根据截取的两个或两个以上的曝光时长,可分别读出与截取的曝光时长对应的截取图像。将所读出的截取图像融合为一帧图像,从而使得所生成的视频的帧率与未出现动态模糊的视频的帧率一致。
由于对第一曝光时长进行截取得到两个或两个以上的曝光时长。因此,截取得到的曝光时长要小于第一曝光时长。比如,将第一曝光时长分割为两个均匀的曝光时长,分割得到的曝光时长仅为第一曝光时长的一半。如果将第一曝光时长分割为三个均匀的曝光时长,分割得到的曝光时长仅为第一曝光时长的三分之一。当第一曝光时长分割为N个均匀的曝光时长时,分割得到的曝光时长为第一曝光时长的N分之一。
可以理解的是,上述分割得到的曝光时长之和,与第一曝光时长相同。在可能的实现的方式中,分割得到的曝光时长之后,可以小于第一曝光时长。即在对第一曝光时长进行分割时,可以截取其中的部分时段,且截取的曝光时长之间包括预先设定的时间间隔。
或者,在可能的实现方式中,所截取的曝光时长也可以为不同时长。
将所得到的截取图像进行融合处理时,可以先对截取图像进行区域划分。比如,可以将截取图像划分为运动区域和非运动区域,根据所划分的区域的不同,按照不同的融合方式进行融合处理。其中,运动区域为包括运动目标的区域,图像中的运动区域之外的其它区域则为非运动区域。
将截取图像的非运动区域进行融合时,可以通过Alpha融合、多频段融合等融合方式,将多个截取图像的非运动区域融合,得到非运动区域的融合图像。
对于运动区域的图像,可以选择所生成的截取图像中的任意一个来确定运动区域的图像。将所确定的运动区域的图像与所融合的非运动区域的图像融合,得到一帧图像。由于对多帧截取图像的非运动区域进行了融合,因此,如果不同帧的非运动区域包括不同的图像质量,可以融合得到更为清晰的非运动区域的图像。对于运动区域,选择其中一帧的截取图像的运动区域,与第一曝光时长中的运动区域的图像相比,由于曝光时长更短,因此,所得到的运动区域的图像的动态模糊更小,图像更为清晰。
在可能的实现方式中,还可以对所述运动区域的图像进行滤波处理。比如,可以通过保边滤波的导向滤波或双向滤波处理,降低运动区域的噪声,提高运动区域的图像质量。或者,也可以通过非局部平均滤波(non-local means,NLM),或者通过高斯滤波减少运动区域的噪声,提升运动区域的图像质量。
所述第一曝光时长可以为电子设备在拍摄稳定画面,即电子设备所拍摄的图像为稳定状态时所采用的曝光时长。
请参考图6,其为本申请实施例提供的一种电子设备的结构示意图。
如图6所示,电子设备200可以包括处理器210,外部存储器接口220,内部存储器221,通用串行总线(universal serial bus,USB)接口230,充电管理模块240,电源管理模块241,电池242,传感器模块280,按键290,马达291,指示器292,摄像头293,显示屏294等。
其中传感器模块280可以包括压力传感器,陀螺仪传感器,加速度传感器,距离传感 器,接近光传感器,温度传感器,触摸传感器,环境光传感器等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备200的具体限定。在本申请另一些实施例中,电子设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器210可以包括一个或多个处理单元,例如:处理器210可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备200的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器210中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器210中的存储器为高速缓冲存储器。该存储器可以保存处理器210刚用过或循环使用的指令或数据。如果处理器210需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器210的等待时间,因而提高了系统的效率。
在一些实施例中,处理器210可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI)和/或通用输入输出(general-purpose input/output,GPIO)接口等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备200的结构限定。在本申请另一些实施例中,电子设备200也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块240用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。
电源管理模块241用于连接电池242,充电管理模块240与处理器210。电源管理模块241接收电池242和/或充电管理模块240的输入,为处理器210,内部存储器221,外部存储器,显示屏294和摄像头293等供电。电源管理模块241还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。
电子设备200通过GPU,显示屏294,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏294和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器210可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏294用于显示图像,视频等。显示屏294包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emittingdiode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrixorganic light emitting diode的,AMOLED),柔性发光二极管(flex light-emittingdiode, FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备200可以包括1个或N个显示屏294,N为大于1的正整数。
电子设备200可以通过ISP,摄像头293,视频编解码器,GPU,显示屏294以及应用处理器等实现拍摄功能。
ISP用于处理摄像头293反馈的数据。例如,电子设备拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头293中。
摄像头293用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备200可以包括1个或N个摄像头293,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备200在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
外部存储器接口220可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备200的存储能力。外部存储卡通过外部存储器接口220与处理器210通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器221可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器210通过运行存储在内部存储器221的指令,从而执行电子设备200的各种功能应用以及数据处理。内部存储器221可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备200使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器221可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
按键290包括开机键,音量键等。按键290可以是机械按键。也可以是触摸式按键。
电子设备200可以接收按键输入,产生与电子设备200的用户设置以及功能控制有关的键信号输入。
马达291可以产生振动提示。马达291可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏294不同区域的触摸操作,马达291也可对应不同的振动反馈效果。
指示器292可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
电子设备200的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构, 或云架构。需要说明的,本申请实施例中,电子设备的操作系统可以包括但不限于
Figure PCTCN2022080722-appb-000001
(Symbian)、
Figure PCTCN2022080722-appb-000002
(Android)、
Figure PCTCN2022080722-appb-000003
(iOS)、
Figure PCTCN2022080722-appb-000004
(Blackberry)、鸿蒙(Harmony)等操作系统,本申请不限定。
本申请实施例以分层架构的Android系统为例,示例性说明电子设备200的软件结构。
图7是本申请实施例的电子设备200的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为五层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,硬件抽象层以及驱动层。
应用程序层可以包括一系列应用程序包。
如图7所示,应用程序包可以包括相机,图库,日历,电话,地图,视频,短信息等应用程序。应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。如图7所示,应用程序框架层可以包括相机应用对应的接口,窗口管理器,内容提供器,视图系统等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:相机服务,媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
下面结合本申请提供的视频拍摄方法,示例性说明电子设备200的工作流程。以图6所示的系统结构示意图为例,说明电子设备200视频拍摄的流程。
如图7所示,硬件抽象层包括摄像头组件(Camera Device3)、图像管道模式组件(Image Pipeline)和软件应用组件(Image Stream Callback MGR),驱动层包括图像传感器(sensor)、图像处理单元前端节点(Front End,ISP-FE)、图像处理单元后端节点(BackEnd,ISP-BE)等。
应用程序层中的相机应用可以以图标的方式显示显示在电子设备的屏幕上,当相机应用的图标被触发,电子设备运行相机应用。相机应用运行在电子设备上,电子设备可以根据用户的操作,向内核层发送相应的触摸事件。内核层将触摸事件转换为原始输入事件,原始输入事件被存储在内核层中。由此一来,当触摸屏接收到触摸事件,启动相机应用,进而通过调用内核层启动摄像头,响应于用户的操作,电子设备处于相机应用中的录像模式。
如图7所示,图像管道模式组件Image Pipeline包括零延时处理器(zero shutter lag Manager,ZSL Manager),FE节点(Node),BE-Streaming后端图像流节点(Node),BE-Snapshot后端图像快照节点(Node),互联网协议族(Internet Protocol Suite,IPS)(或理解为管道模式中的管道过滤模型),承载平台算法的存储器等。
其中,ZSL处理器用于在相机应用运行时提供预览图像,ZSL处理器设置在raw域图像的历史帧容留器中。ZSL Manager可以用于管理ZSL获取的预览图像流,并可以对预览图像流进行配置、排队,选帧等操作。FE Node是管道模式的根节点,即电子设备采集的所有图像处理的原始节点,可以作为硬件ISP芯片的前端处理器,可用于将第一曝光时长分割后得到的两个或两个以上的曝光时长所对应的截取图像进行融合处理,得到融合后的一帧图像。
后端图像流节点(BE-Streaming-Node)用于处理预览图像流。例如,电子设备处于录像模式下的预览图像流,以及电子设备处于录像状态时的预览图像流等。在一些实施例中,还可以挂载IPS,即后端图像流节可以访问ISP处理的预览图像。挂载是指操作系统允许电子设备访问存储设备上的文件的过程。
后端快照节点(BE-Snapshot-Node)用于处理录像相关的图像。在一些实施例中,还可以挂载IPS,即后端快照节点可以访问ISP处理的快照图像。IPS是HAL层的管道过滤模型,IPS中可以设置插件,插件可以用于访问存储设备存储的算法。IPS访问算法后可以用于接管相机预览、相机拍照动作和录像模式下的数据等。在一些实施例中,IPS可以与相机的框架层和HAL交互,以实现对应的功能。
Algo是一种图像处理算法的模块,可以供IPS挂载。在一些实施例中,Algo中可以包括算法处理模块,当Algo调用算法处理模块,运行算法时可以访问CPU、GPU和NPU等处理器。
图像传感器(sensor)用于采集图像,用于负责硬件传感器的上电或下电时序图、还用于匹配控制、实时图像传感器配置和复位功能。其中,录像模式的曝光方式所采用的第一曝光时长截取两个或两个以上的曝光时长进行曝光控制的方式可以通过设置图像传感器实现。
示例性的,电子设备处于相机应用中的录像模式,相机应用调用应用框架层中对应的接口,通过调用内核层启动摄像头驱动,开启电子设备的摄像头,并通过摄像头采集图像。其中,录像模式对应的曝光方式包括:第一曝光模式,按照预设的第一曝光时长进行曝光;第二曝光模式,从第一曝光时长截取多个曝光时长分别进行曝光。电子设备的摄像头按照录像模式所对应的曝光方式曝光,图像传感器采集图像。ZSL Manager中保存所截取的曝光时长对应的两帧或两帧以上的图像。电子设备中的FE Node可以处理ZSL Manager中的两帧或两帧以上的图像,以生成预览图像流,电子设备显示预览图像。当电子设备拍摄键 (或拍摄控件)接收到触发操作,FE Node将根据所读出的两个或两个以上的截取图像,融合生成一帧图像,通过显示屏显示该融合图像对应的视频。
需要说明的,本申请实施例中的电子设备可以是具有拍照功能的手机、运动相机(GoPro)、数码相机、平板电脑、桌面型、膝上型、手持计算机、笔记本电脑、车载设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等,本申请实施例对该电子设备的具体形态不作特殊限制。
示例性的,以电子设备包括录像模式,录像模式下电子设备可以按照第一曝光时长所截取的曝光时长采集两帧或两帧以上的图像,并融合生成一帧图像用于生成视频。如图8所示,其示出电子设备生成视频的流程。
如图8所示为电子设备的拍照流程在硬件抽象层和框架层之间的运行过程。其中,框架层(framework)包括相机应用(APP1),相机服务(camera service),显示合成系统(surface flinger);硬件抽象层(HAL)包括相机驱动3(camera device3),第一曝光方式,软件应用流(APP Stream),FE节点(Node),BE-Streaming后端图像流节点(Node)和ZSL处理器。
其中,相机应用被触发,相机应用下发相机运行请求,且相机应用的模式为录像模式。框架层中的相机服务被触发,并向硬件抽象层中相机对应的相机驱动。相机驱动调用第一曝光模式或第二曝光模式进行曝光。相机驱动根据所调用的第一曝光模式或第二曝光模式,提交拍照请求至FE节点。ZSL处理器中包括图像队列VC0,第一曝光模式指示电子设备按照第一曝光时长曝光,当电子设备按照第一曝光时长下采集的图像形成VC0图像队列。第二曝光模式指示电子设备按照第二曝光时长曝光,当电子设备按照第二曝光时长下采集的图像形成VC0图像队列。
以电子设备显示预览图像的过程为例,FE节点将采集得到的拍照队列传输至BE-Streaming后端图像流节点。ZSL处理器将VC0图像队列传输至图像管道,图像管道将图像队列通过软件应用流传输至框架层。这样一来,框架层接收到来自硬件抽象层的反馈数据,通过显示合成系统将图像显示在显示屏。
以下实施例中的方法均可以在具备上述硬件结构的电子设备中实现。
本申请实施例以电子设备是手机为例,手机中安装相机应用。请参考图9,为本申请实施例提供的拍照方法的流程图。如图9所示,该方法包括S901-S902。
其中,手机运行相机应用,手机采用相机应用中的录像模式进行图像采集。本申请实施例以为例,说明本申请实施例提供的视频拍摄方法。
需要说明的,相机中的录像模式仅是对应采用第二曝光方式获取图像。在实际应用中,应用的名称可能不同,进入录像功能的形式可能存在不同,比如直播应用程序中对应进入直播模式,即时通信应用程序中对应进入视频通话模式等。
在S901中,手机打开相机应用中的录像模式。
需要说明的是,当手机运行相机应用,相应应用可以包括多种拍摄模式。不同的拍照模式得到的图像的效果不同。例如,相机应用包括人像模式、夜景模式、录像模式等。人像模式下得到的图像中人物的面部特征明显,夜景模式下得到的图像清晰度高,录像模式 下可以进行视频拍摄,生成所拍摄的图像的视频文件。每种模式也是在对应的场景下使用的。本申请实施例以录像模式为例,说明本申请实施例提供的视频拍摄方法。可以理解的是,该视频拍摄方法并不是仅适用于录像模式,还可以用于实时显示视频图像的场景,比如用于视频通话,或者用于直播,或者用于实时的图像预览等。
在本申请实施例中,相机生成视频图像或者生成预览图像的曝光时长,包括第一曝光时长和第二曝光时长,第二曝光时长可以为从第一曝光时长中所截取的两个或两个以上的曝光时长。
其中,相机可以根据手机当前的拍摄状态,确定采用第一曝光时长或第二曝光时长,进行图像的曝光和采集。
所述拍摄状态,是指手机通过相机应用进行图像采集时,通过手机,以及手机所采集的图像中的物体之间的相对状态。手机与所采集的图像中的物体之间的相对状态,可以包括相对运动的状态和相对静止的状态。当手机与采集的图像中的物体之间为相对运动的状态,则可以将当前的拍摄状态描述为运动状态。如果手机与采集的图像中的物体之间为相对静止的状态,则可以将当前的拍摄状态描述为稳定状态。下面对拍摄状态的应用场景介绍如下:
场景一:
运动用户拍摄静止物体。此时,所拍摄的图像中的物体(包括山峰、树木、岩石、建筑等)本身处于静止状态(相对于地球处于静止状态)。为了获得更多的拍摄信息,用户通过快速跑动或快速转动、乘坐交通工具或者乘坐专门的拍摄移动工具快速移动,使得手机相对于所采集的图像中的物体处于相对运动的状态。在此场景下,手机的拍摄状态可能为运动状态。
场景二:
静止用户拍摄运动物体。比如,用户使用手机拍摄海浪、拍摄运动员的运动过程、拍摄喷发的火山、流动的人群等。此时,所拍摄的图像中物体包括运动物体。此时,所拍摄的图像中的运动物体,包括如运动的海浪、运动员、火山和人群,相对于手机处于运动状态,手机的拍摄状态可能为运动状态。
场景三:
静止用户拍摄静止物体。用户使用手机拍摄静止的画面内容,比如用户慢速地拍摄风景视频,包括山峰、树木、岩石、建筑等,手机相对于所采集的图像中的物体的相对运动速度较小,此时,手机的拍摄状态可能为稳定状态。
场景四:
运动用户拍摄运动物体。此时,快速移动的用户所拍摄的物体,与用户可能处于相对静止状态。比如,A、B两辆汽车处于相对静止的行驶状态,用户乘坐在汽车A,通过手机对汽车B进行视频拍摄。此时手机相对于所拍摄的图像中的物体,即汽车B,处于相对静止状态,手机的拍摄状态可能为稳定状态。或者,快速移动的用户拍摄运动物体时,手机相对于所拍摄的运动物体处于相对运动的状态,此时,手机的拍摄状态可能为运动状态。
当手机的拍摄状态为运动状态时,在同一帧图像中,曝光时长越长,在该曝光时长内,有相对运动的物体的相对运动距离越长,在该曝光时长内所曝光的图像出现运动模糊的几率越大。因此,在手机的拍摄状态为运动状态时,通常选用第二曝光时长进行曝光和图像采集。当手机的拍摄状态为稳定状态时,为了提高图像采集效率,减少图像读出次数,可以选用第一曝光时长进行图像采集。
其中,第一曝光时长可以根据所采集的视频的帧率确定。
比如,预先设定的视频的帧率为30帧/秒,即所生成的视频中,一秒钟需要播放30帧图像。相应的,在进行视频图像采集时,采集一帧图像的时长为1/30秒,即约33.3毫秒。通过第一曝光时长进行图像采集时,可以采用非重叠(non-overlapped)曝光模式进行曝光和图像采集,即相邻的图像生成周期在时间上不重叠,也就是生成任意帧图像的第一曝光时长和读出数据的时长,与其它帧的第一曝光时长和读出数据的时长不重叠。可以结合曝光模式(非重叠曝光模式和重叠曝光模式)的参数,包括曝光间隔时长、读出数据的时长来确定第一曝光时长。比如,手机所设定的曝光模式为重叠曝光模式,在该重叠曝光模式下,在前一帧曝光完成后即开始下一帧图像的曝光,若读出数据的时长为3.3毫秒,那么第一曝光时长为30毫秒。
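作为示意,下面给出一个按帧率和读出数据时长估算第一曝光时长的Python示例。函数名与数值均为示例假设,并非本申请限定的实现方式:

```python
def first_exposure_ms(fps: float, readout_ms: float) -> float:
    """按一帧采集周期扣除读出数据时长的方式估算第一曝光时长(毫秒)。"""
    frame_period_ms = 1000.0 / fps   # 一帧图像对应的采集周期
    return frame_period_ms - readout_ms

# 帧率30帧/秒、读出时长约3.3毫秒时,第一曝光时长约为30毫秒
print(first_exposure_ms(30, 3.3))
```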
所述第二曝光时长,即对第一曝光时长进行时间截取所得到的曝光时长。
所述时间截取,可以包括无间隔的时间截取、有间隔的时间截取、等时长截取、非等时长截取等。
在可能的实现方式中,如图10所示,第一曝光时长与所截取得到的两个或两个以上的第二曝光时长之和相等。其中,两个或两个以上的第二曝光时长可以为时长相等的曝光时长。在第二曝光时长为时长相等的曝光时长时,可以使得由两个或两个以上的第二曝光时长所对应得到的两个或两个以上的截取图像的亮度信息相同,更便于对所得到的两个或两个以上的截取图像进行配准和比较。其中,截取图像即为按第二曝光时长进行曝光时所得到的对应的图像。
比如,图10所示,第一曝光时长为30毫秒,第二曝光时长为10毫秒,且第二曝光时长的数量为3个,即10毫秒*3=30毫秒。
在可能的实现方式中,如图11所示,第一曝光时长大于所截取的两个或两个以上的第二曝光时长之和,即在每两个第二曝光时长的时段之间,还包括空闲时段。并且,所截取得到的两个或两个以上的第二曝光时长为时长相等的曝光时长,从而使得由两个或两个以上的第二曝光时长所对应得到的两个或两个以上的截取图像的亮度信息相同,更便于对所得到的两个或两个以上的截取图像进行配准和比较。
比如,图11所示,第一曝光时长为30ms,第二曝光时长为9ms。在每个第二曝光时长之间还包括间隔时长1ms。
比如,第一曝光时长为30毫秒,第二曝光时长为9毫秒,第二曝光时长的数量为3个。9毫秒*3<30毫秒。在每两个第二曝光时长之间,可以包括1毫秒的空闲时段。
在可能的实现方式中,如图12所示,第一曝光时长所截取得到的各个第二曝光时长可以不相等。比如,第一曝光时长为30毫秒,第二曝光时长为时长不等的多个时段,可以包括如8毫秒、10毫秒、12毫秒三个时长。可以理解的是,所截取得到的第二曝光时长,不应局限于所截取的第二曝光时长的数量以及大小。另外,所截取的时长不同的各个第二曝光时长之和,也可以小于第一曝光时长。
在可能的实现方式中,如图13所示,根据第一曝光时长截取得到两个或两个以上的第二曝光时长时,如果第一曝光时长对应的图像采集模式为非重叠模式,或者所述第一曝光时长对应的图像采集模式为重叠模式,但相邻两个曝光时长之间仍然留有空闲时长时,所生成的两个或两个以上的第二曝光时长之和,可以大于第一曝光时长。
比如图13所示,第一曝光时长为30毫秒,相邻两个第一曝光时长之间相隔大于10毫秒,所生成的三个第二曝光时长可以为相等的11毫秒等。
或者,在可能的实现方式中,各个第二曝光时长的时长可以不同。或者,各个第二曝光时长之间可以包括或不包括空闲时长。
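作为示意,下面给出一个按等时长、可选空闲间隔对第一曝光时长进行时间截取的Python示例。函数名、参数和返回形式均为示例假设,数值仅用于对照上文的30毫秒示例:

```python
def split_exposure(first_ms: float, n: int, gap_ms: float = 0.0):
    """将第一曝光时长截取为n个等时长的第二曝光时长,
    相邻第二曝光时长之间保留gap_ms的空闲时段,返回各段的(起始时刻, 时长)。"""
    if n < 2:
        raise ValueError("第二曝光时长的数量应为两个或两个以上")
    second_ms = (first_ms - (n - 1) * gap_ms) / n
    return [(i * (second_ms + gap_ms), second_ms) for i in range(n)]

print(split_exposure(30, 3))       # 无间隔截取:3个10毫秒
print(split_exposure(30, 3, 1.0))  # 有间隔截取:3个约9.3毫秒,中间各留1毫秒空闲
```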
在本申请实施例中,手机可以接收用户的设定指令,确定当前所采用的曝光模式为第一曝光模式(采用第一曝光时长进行曝光的模式)或第二曝光模式(采用第二曝光时长进行曝光的模式)。比如,当相机应用进入录像模式时,可以在视频拍摄界面中显示模式选择按键。如图14所示,该模式选择按键可以为一个按键。当接收到用户的点击指令时,电子设备可以控制在第一曝光模式和第二曝光模式之间进行切换。或者,在可能的实现方式中,手机也可以通过模式选择的方式,在视频拍摄界面下选择第一曝光模式和第二曝光模式。
在可能的实现方式中,手机可以根据所检测到的拍摄状态,确定当前所选择的曝光模式(第一曝光模式或第二曝光模式)。根据所检测到的拍摄状态,可以确定手机当前的拍摄状态是否为稳定状态。如果为稳定状态,则选择第一曝光模式,如果为非稳定状态,即运动状态,则选择第二曝光模式。其中,稳定状态和非稳定状态,可以根据所拍摄的视频图像中的动态模糊的程度来确定。当所拍摄的视频图像中出现的动态模糊的程度大于预先设定的模糊阈值时,则认为手机处于非稳定状态。如果拍摄的视频图像中出现的动态模糊的程度小于或等于预先设定的模糊阈值时,则认为手机处于稳定状态。
其中,稳定状态和非稳定状态的参数阈值,可以通过统计数据的方式预先确定,该参数阈值可以包括运动参数阈值和/或图像参数阈值。可以根据拍摄状态的应用场景,统计得到不同应用场景下,确定手机的拍摄状态的定量的参数阈值。可以根据当前所选择的应用场景确定拍摄状态的确定方式,或者也可以直接根据传感数据进行判断,和/或基于图像数据进行拍摄状态的判断。
比如,如图15所示的视频拍摄流程示意图中,确定手机当前的拍摄状态的方式,可以包括基于传感数据进行判断,和/或基于图像数据进行判断。
当拍摄应用场景为场景一时,可以基于传感数据进行拍摄状态的判断和确定。可以通过运动传感器采集手机的传感数据(或者也可以称为运动数据),包括手机的平移速度、平移加速度和/或角位移速度等。可以通过加速度传感器和/或角速度传感器进行运动数据的采集。角速度传感器可以包括陀螺仪等。根据加速度传感器所检测到的加速度值,可以确定手机移动的速度。
根据传感数据确定手机的拍摄状态时,如果检测到手机的平移速度大于预设的第一速度阈值,或者手机的平移加速度大于预设的第一加速度阈值,则确定手机处于非稳定状态。或者,如果检测到手机移动的角位移速度大于预定的第一角速度阈值,则确定手机处于非稳定状态。或者,在检测到手机平移的速度大于预设的第二速度阈值,且角位移速度大于预定的第二角速度阈值时,手机处于非稳定状态;或者,检测到手机平移的加速度大于预设的第二加速度阈值,且角位移速度大于预设的第二角速度阈值时,手机处于非稳定状态。其中,第一速度阈值可以大于第二速度阈值,第一角速度阈值可以大于第二角速度阈值。或者,第一速度阈值可以等于第二速度阈值,第一角速度阈值可以等于第二角速度阈值。其中,第一速度阈值、第二速度阈值、第一角速度阈值、第二角速度阈值可以与第一曝光时长相关,第一曝光时长越长,第一速度阈值、第二速度阈值、第一角速度阈值、第二角速度阈值越小。在可能的实现方式中,第一速度阈值、第二速度阈值可以大于0.05m/s且小于0.3m/s,比如可以为0.1m/s;第一角速度阈值、第二角速度阈值可以大于0.02π/s且小于0.08π/s,比如可以为0.05π/s;第一加速度阈值、第二加速度阈值可以大于0.05m/s²且小于0.3m/s²,比如可以为0.15m/s²。
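作为示意,下面给出一个基于传感数据判断拍摄状态的Python示例。函数名为示例假设,各阈值取自上文示例区间内的数值,并非本申请限定的取值:

```python
import math

def is_unstable_by_sensor(v, a, w,
                          v1=0.1, a1=0.15, w1=0.05 * math.pi,
                          v2=0.1, a2=0.15, w2=0.05 * math.pi):
    """根据传感数据判断手机是否处于非稳定状态(示意实现)。
    v: 平移速度(m/s), a: 平移加速度(m/s^2), w: 角位移速度(rad/s)。"""
    if v > v1 or a > a1 or w > w1:                   # 任一项超过对应的第一阈值
        return True
    if (v > v2 and w > w2) or (a > a2 and w > w2):   # 平移量与角速度组合超过第二阈值
        return True
    return False
```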
当拍摄应用场景为场景四时,可以结合传感数据和图像数据进行判断:如果检测到传感数据大于预定的传感阈值,包括如手机的平移速度大于预定的第一速度阈值,或手机的角位移速度大于预定的第一角速度阈值,或手机移动的速度大于预设的第二速度阈值且移动的角速度大于预定的第二角速度阈值,则进一步结合图像数据综合判断。如果根据图像数据判断手机处于稳定状态,则可以综合确定手机处于稳定状态。通过综合判断,可以对相对静止状态进行识别,比如可以对用户的乘车状态进行识别,在乘车时所拍摄的画面为相对静止的画面时,可认为手机处于稳定状态。
当拍摄场景为场景二或场景三时,可以基于图像数据确定手机的拍摄状态。可以根据预定时间间隔内所采集的两帧图像进行比较,来确定手机的拍摄状态。其中,预定时间间隔的两帧图像,可以为相邻两帧的图像,或者也可以为其它所设定的时长间隔内所采集的图像,比如间隔100毫秒-500毫秒时长间隔内所采集的图像。
在对所采集的图像进行比较时,可以先确定两帧图像中发生变化的像素点与图像的总的像素点的比值,再将所确定的比值与预先设定的像素点比例阈值进行比较。如果该比值大于预先设定的像素点比例阈值,则确定拍摄两帧图像时的手机处于非稳定状态;否则,确定拍摄两帧图像时的手机处于稳定状态。若确定手机处于非稳定状态,由于拍摄状态具有一定持续性,可以在接下来的图像采集中采用第二曝光模式,即选用两个或两个以上的第二曝光时长得到对应的两帧或两帧以上的图像。
比如,所选择的用于比较的相邻两帧图像的像素点总数为N1,发生变化的像素点的数量为N2,预先设定的像素点比例阈值为Y1,如果N2/N1大于或等于Y1,则确定手机处于非稳定状态。如果N2/N1小于Y1,则确定手机处于稳定状态。该像素比例阈值可以为8%-12%中的任意数值,比如可以为10%。
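下面给出一个按变化像素占比判断拍摄状态的Python示意实现,其中以灰度差判定像素是否发生变化只是可选方式之一,阈值均为示例值:

```python
import numpy as np

def is_unstable_by_pixels(frame1, frame2, ratio_threshold=0.10, gray_threshold=50):
    """比较预定时长间隔内采集的两帧灰度图像(numpy数组),
    统计发生变化的像素点占总像素点的比值并与比例阈值比较。"""
    diff = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16))
    changed = diff > gray_threshold        # 发生变化的像素点,对应N2
    ratio = changed.sum() / changed.size   # N2/N1
    return ratio >= ratio_threshold        # True表示非稳定状态
```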
在确定两帧图像中发生变化的像素点时,可以基于像素点的相似度和/或像素点的灰度变化进行判断。
基于像素点的相似度进行像素点是否发生变化的判断时,可以确定需要判断的两个像素点的RGB值对应的三维向量,然后通过计算两个三维向量的距离,即颜色空间的距离的方式,来确定两个像素是否相似。比如,可以将所计算的距离与预先设定的距离阈值进行比较,如果计算的距离大于预先设定的距离阈值,则确定两个像素发生变化;如果小于或等于预先设定的距离阈值,则确定两个像素没有发生变化。
比如,像素1对应的三维向量为(R1,G1,B1),像素2对应的三维向量为(R2,G2,B2),像素1和像素2所对应的三维向量的距离可以表示为:(R1-R2)^2+(G1-G2)^2+(B1-B2)^2的值的平方根,即像素1和像素2在颜色空间的距离。
或者,也可以通过计算两个像素对应的三维向量的夹角的方式,来确定两个像素是否发生变化。如果两个像素对应三维向量的夹角大于预定的夹角阈值,则确定两个像素发生变化。如果两个像素对应的三维向量的夹角小于或等于预定的夹角阈值,则两个像素未发生变化。
比如,像素1对应的三维向量为(R1,G1,B1),像素2对应的三维向量为(R2,G2,B2),假设L1=sqrt(R1*R1+G1*G1+B1*B1),L2=sqrt(R2*R2+G2*G2+B2*B2),那么夹角a可以表示为:cos(a)=(R1*R2+G1*G2+B1*B2)/(L1*L2)。根据该公式即可计算得到像素1和像素2对应的三维向量的夹角。
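下面给出按上述颜色空间距离和三维向量夹角公式进行像素比较的Python示意实现,函数名为示例假设:

```python
import math

def color_distance(p1, p2):
    """两个像素的RGB三维向量在颜色空间的欧氏距离。"""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(p1, p2)))

def color_angle(p1, p2):
    """两个像素的RGB三维向量之间的夹角(弧度)。"""
    dot = sum(c1 * c2 for c1, c2 in zip(p1, p2))
    l1 = math.sqrt(sum(c * c for c in p1)) or 1e-6   # 避免纯黑像素导致除零
    l2 = math.sqrt(sum(c * c for c in p2)) or 1e-6
    return math.acos(max(-1.0, min(1.0, dot / (l1 * l2))))

# 距离大于距离阈值,或夹角大于夹角阈值,则可认为两个像素发生变化
print(color_distance((200, 40, 40), (60, 180, 60)))
print(color_angle((200, 40, 40), (60, 180, 60)))
```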
或者,还可以将像素的RGB值转换为HSI(色调(Hue)、色饱和度(Saturation)、亮度(Intensity))值,根据转换后的色调的差异、亮度的差异和色饱和度的差异,来确定两个像素是否发生变化。比如,可以分别设定相应的阈值来确定需要比较的两个像素是否发生变化。当其中任意一项的差异大于预先设定的对应阈值时,则确定像素发生变化。
在可能的实现方式中,可以根据所获取的预定时长间隔内拍摄的两帧图像的边缘信息来确定手机的拍摄状态。预定时长可以为获取相邻的两帧图像的时间间隔,或者获取相隔M帧图像的时间间隔,M可以为小于10的自然数。
根据预先设定的时长间隔拍摄两帧图像后,对所拍摄的图像进行边缘检测,获取两帧图像中所包括的边缘信息。如果手机和拍摄对象处于相对静止状态,那么所拍摄的两帧图像的边缘区域的锐度不会发生明显的变化。因此,可以根据两帧图像的边缘信息的锐度变化来确定图像的边缘是否发生变化。统计图像中发生变化的区域,或者图像中锐度降低的区域相对于图像的边缘区域的比值,如果该比值大于或等于预定的边缘比例阈值,则确定手机处于非稳定拍摄状态,否则处于稳定拍摄状态。该边缘比例阈值可以为15%-25%中的任意值,比如可以为20%。
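作为示意,下面给出一个基于边缘锐度变化判断拍摄状态的Python示例。其中以Sobel梯度幅值近似边缘锐度只是一种可选做法,函数名和各阈值均为示例假设:

```python
import cv2
import numpy as np

def is_unstable_by_edges(frame1, frame2, edge_ratio_threshold=0.20,
                         edge_threshold=100.0, degrade_factor=0.5):
    """frame1、frame2为预定时长间隔内拍摄的两帧灰度图(numpy数组)。
    统计第一帧边缘位置上锐度明显下降的像素占边缘像素的比例,并与边缘比例阈值比较。"""
    s1 = cv2.magnitude(cv2.Sobel(frame1, cv2.CV_32F, 1, 0),
                       cv2.Sobel(frame1, cv2.CV_32F, 0, 1))
    s2 = cv2.magnitude(cv2.Sobel(frame2, cv2.CV_32F, 1, 0),
                       cv2.Sobel(frame2, cv2.CV_32F, 0, 1))
    edges = s1 > edge_threshold                      # 第一帧的边缘区域
    if not edges.any():
        return False
    degraded = edges & (s2 < degrade_factor * s1)    # 锐度明显降低的边缘像素
    return degraded.sum() / edges.sum() >= edge_ratio_threshold
```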
本申请在手机打开录像模式时,可以接收用户的设定指令来确定是否采用第二曝光模式进行曝光,或者可以根据手机的拍摄状态,来确定是否采用第二曝光模式进行曝光。在可能的实现方式中,还可以根据启动拍摄的应用类型,或者根据用户的拍摄习惯,确定是否采用第二曝光模式进行曝光。
在S902中,手机显示预览图像,或者接收录像指令,手机显示预览图像并生成视频文件。
在手机打开相机应用中的录像模式之后,根据所设定的曝光模式的确定方式,确定手机当前以第一曝光模式或第二曝光模式进行曝光和读出图像数据。
比如,在图14所示的视频拍摄方法的流程示意图中,基于图像数据和传感数据确定手机当前处于稳定状态或非稳定状态后,根据拍摄状态与曝光模式的对应关系,如果当前的拍摄状态为稳定状态,则选择第一曝光模式进行曝光,比如按照30毫秒的曝光时长进行曝光出帧,即生成视频一帧对应的图像。如果当前的拍摄状态为非稳定状态,则可以按照第二曝光模式进行曝光,比如以3个曝光时长分别为10毫秒的曝光时长进行曝光,得到三帧图像。将得到的三帧图像进行融合处理得到视频的一帧图像,或进一步通过ISP(图像信号处理)处理后,通过屏幕显示预览图像,或者生成视频文件。
通过第二曝光模式进行图像采集时,由于第二曝光时长小于第一曝光时长,因此,在更短的曝光时长内,第二曝光模式所生成的截取图像包括更少的动态模糊信息,因而具有更为清晰的显示效果。将显示效果更佳的两个或两个以上的截取图像进行融合处理,可以得到一帧动态模糊性能更优的图像,即清晰度更佳的图像。
在本申请实施例中,为了得到更佳的融合效果,如图15中的图像融合示意图所示,可以将截取图像中的运动区域和非运动区域按照不同的融合方式进行融合。
其中,运动区域可以理解为图像中发生动态模糊的区域,非运动区域可以理解为图像中未发生动态模糊的区域。
其中,两帧或两帧以上截取图像的非运动区域的融合,可以采用Alpha融合、多频段融合等融合方式,将配准变换后的截取图像的非运动区域融合为一帧图像。
在Alpha融合方式中,可以预先设定各个截取图像对应的透明度,或者根据第二曝光时长确定各个截取图像对应的透明度,将各个截取图像与对应的透明度相乘后求和,即可得到融合后的截取图像的非运动区域的图像。
比如,第二曝光模式生成的截取图像包括三帧,分别为P1、P2和P3。三帧图像对应的透明度为a1、a2和a3,则可以确定融合后的非运动区域的图像为:P1*a1+P2*a2+P3*a3。在可能的实现方式中,可以根据截取图像的曝光时长确定截取图像对应的透明度,比如,三帧图像的曝光时长分别为t1、t2和t3,可以确定a1=t1/(t1+t2+t3),a2=t2/(t1+t2+t3),a3=t3/(t1+t2+t3),融合后的非运动区域的图像为:P1*a1+P2*a2+P3*a3。
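下面给出按曝光时长确定透明度并对非运动区域做Alpha融合的Python示意实现,对应上文P1*a1+P2*a2+P3*a3的加权求和,函数名为示例假设:

```python
import numpy as np

def alpha_fuse(frames, exposures_ms):
    """按各截取图像的曝光时长确定透明度a_i = t_i / Σt,对非运动区域做Alpha融合。
    frames: 同尺寸截取图像列表(numpy数组); exposures_ms: 对应的第二曝光时长。"""
    total = float(sum(exposures_ms))
    fused = np.zeros_like(frames[0], dtype=np.float32)
    for frame, t in zip(frames, exposures_ms):
        fused += frame.astype(np.float32) * (t / total)   # P_i * a_i 累加
    return np.clip(fused, 0, 255).astype(np.uint8)
```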
对于运动区域,可以选择其中任一帧作为运动区域的图像,与非运动区域融合,得到一帧视频中的图像。
在可能的实现方式中,如果所生成的截取图像的数量为三帧或三帧以上,则可以选择中间的截取图像的运动区域,与融合后的非运动区域融合得到一帧视频中的图像。
比如,根据所截取的曝光时长所生成的两帧或两帧以上的截取图像中,如果生成的截取图像为3帧,则可以选择第2帧图像中的运动区域,与融合后的非运动区域进行融合。如果生成的截取图像为4帧,则可以选择第2帧或第3帧图像中的运动区域,与融合后的非运动区域进行融合。
对于所选择的截取图像的运动区域,还可以进一步对该运动区域进行优化处理,可以包括对该帧的运动区域进行滤波处理,包括如通过保边滤波中的导向滤波或双边滤波处理,降低运动区域的噪声,提高运动区域的图像质量。或者,也可以通过非局部平均滤波(non-local means,NLM),或者通过高斯滤波减少运动区域的噪声,提升运动区域的图像质量。
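作为示意,下面给出一个对运动区域做降噪滤波的Python示例,这里以OpenCV中的双边滤波、非局部平均滤波和高斯滤波为例(以灰度图为输入,滤波参数均为示例值):

```python
import cv2

def denoise_motion_region(image, method="bilateral"):
    """对所选截取图像的运动区域进行降噪处理的示意实现。"""
    if method == "bilateral":                        # 保边的双边滤波
        return cv2.bilateralFilter(image, 9, 50, 50)
    if method == "nlm":                              # 非局部平均滤波
        return cv2.fastNlMeansDenoising(image, None, 10)
    return cv2.GaussianBlur(image, (5, 5), 0)        # 高斯滤波
```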
本申请实施例可以通过图像比较的方式,确定截取图像中所包括的运动区域和非运动区域。如图16所示,为了提高所确定截取图像中的运动区域和非运动区域的精度,可以在进行图像比较之前,对图像进行匹配和变换。
如图16所示,所生成的截取图像为3帧,分别为第N-1帧,第N帧和第N+1帧,可以将第N帧作为基准帧,将第N+1帧和第N-1帧与该基准帧进行图像配准,从而确定第N+1帧与第N帧配准时,第N+1帧的变换矩阵,以及确定第N-1帧与第N帧配准时,第N-1帧的变换矩阵。
其中,图像配准方法可以包括如平均绝对差算法、绝对误差和算法、误差平方和算法、平均误差平方和算法、归一化积相关算法、序贯相似性检测算法、局部灰度值编码算法等。
在确定第N+1帧和第N-1帧相对于基准帧的变换矩阵后,可以根据所确定的变换矩阵,分别对第N-1帧和第N+1帧进行配准变换,从而使得所生成的截取图像为配准变换后的图像,可以更为准确地进行像素的比较,从而更为有效地确定图像中的运动区域和非运动区域。
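作为示意,下面给出一个将截取图像配准到基准帧并做配准变换的Python示例。这里以ORB特征匹配加单应矩阵估计为例,并非上文列举的平均绝对差等模板匹配类算法,函数名为示例假设:

```python
import cv2
import numpy as np

def register_to_base(frame, base):
    """将某一截取图像配准到基准帧,返回配准变换后的图像。"""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(base, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # 帧间变换矩阵
    h, w = base.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))            # 配准变换后的图像
```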
在对变换后的截取图像进行像素比较时,如果截取图像所对应的第二曝光时长相同,则可以将变换后的第N+1帧图像(N'+1)、变换后的第N-1帧图像(N'-1)和基准图像进行灰度化处理,然后将灰度化处理后的第N+1帧图像、第N-1帧图像的像素,分别与基准图像的像素进行灰度值比较,如果灰度值的差值大于预先设定的灰度阈值(比如该灰度阈值可以为30-70中的任意值),则确定该像素在所比较的两帧图像中属于运动区域,否则属于非运动区域。
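下面给出按灰度差确定运动区域掩码的Python示意实现,函数名为示例假设,灰度阈值取自上文30-70的示例区间:

```python
import cv2
import numpy as np

def motion_mask(warped, base, gray_threshold=50):
    """将配准变换后的截取图像与基准图像灰度化后逐像素比较灰度差,
    差值大于灰度阈值的像素划入运动区域。"""
    g1 = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY).astype(np.int16)
    g2 = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY).astype(np.int16)
    return (np.abs(g1 - g2) > gray_threshold).astype(np.uint8)   # 1表示运动区域
```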
在可能的实现方式中,如果截取图像所对应的第二曝光时长不同,则可以采用两个像素对应的三维向量的夹角,来确定所比较的像素在两帧比较的图像中,是否属于运动区域或属于非运动区域。根据对变换后的图像的像素与基准图像的像素的比较结果,即可得到由变换后的图像与基准图像所确定的运动区域和非运动区域。
由于曝光时长不同,因此,在进行非运动区域融合时,可以融合更为丰富的场景信息,得到更佳的非运动区域的场景图像。而对于不同曝光时长所得到的截取图像,在确定运动区域的图像时,可以根据曝光时长最短的截取图像中的运动区域,与融合后的非运动区域进行融合,从而使得融合后的图像具有更为清晰的运动区域。
可以理解的是,图16是以三帧截取图像进行示例说明。在可能的实现方式中,比如生成的截取图像为两帧,分别为第N帧和第N+1帧时,可以确定任意一帧,比如第N帧为基准图像。将第N+1帧与该基准图像进行配准,确定第N+1帧与基准图像之间的变换矩阵,根据该变换矩阵对第N+1帧进行变换,得到变换后的第N+1帧图像(N'+1)。将N'+1与基准图像进行像素比较,确定两帧图像中发生变化的像素。根据发生变化的像素所在的位置确定运动区域,根据未发生变化的像素所在的位置确定非运动区域。将第N帧图像与N'+1帧图像的非运动区域通过Alpha融合或多频段融合等方式融合,得到融合后的非运动区域。将第N帧图像的运动区域,或者将N'+1帧的运动区域的图像进行滤波处理后,与融合后的非运动区域融合,得到一帧视频图像。
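作为示意,下面给出将融合后的非运动区域与所选截取图像的运动区域组合为一帧视频图像的Python示例,函数名与参数形式均为示例假设:

```python
import numpy as np

def compose_frame(fused_static, motion_frame, mask):
    """按运动区域掩码组合出一帧视频图像。
    fused_static: 融合后的非运动区域图像; motion_frame: 所选截取图像;
    mask: 运动区域掩码(1表示运动区域),与图像尺寸一致。"""
    if fused_static.ndim == 3:          # 彩色图像时将掩码扩展到通道维
        mask = mask[..., None]
    return np.where(mask.astype(bool), motion_frame, fused_static)
```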
示例性的,可以对手机进行模块化划分,使手机得到去模糊的视频。如图17所示,手机可以包括采集模块1701、去模糊模块1702。
其中,采集模块1701用于先确定当前拍摄参数下的第一曝光时长,然后对第一曝光时长进行时间段截取,得到两个或两个以上的第二曝光时长,由两个或两个以上的第二曝光时长进行曝光和读出数据,得到两个或两个以上的截取图像。
去模糊模块1702用于根据所采集的两个或两个以上的截取图像进行融合处理,得到融合后的图像。将融合后的一帧图像作为视频的一帧图像,用于预览显示或生成视频文件。由于第一曝光时长所截取得到的两个或两个以上的截取图像融合为一帧图像,与第一曝光时长得到的一帧图像相比,两者在相同时长内所得到的图像数量相同,因此,通过曝光时长截取后所得到的融合图像,可以适应手机的视频帧率要求。
为了提升本申请实施例使用的便利性,对手机所划分的模块还可以包括切换模块1703。该切换模块1703可以实时检测手机的拍摄状态。如果检测到手机的拍摄状态为稳定状态,则使用第一曝光时长进行曝光,通过读出数据直接得到一帧图像,用于预览显示或用于生成视频文件中的图像。如果检测到手机的拍摄状态为非稳定状态,则对第一曝光时长进行时间段截取,得到两个或两个以上的第二曝光时长,根据两个或两个以上的第二曝光时长进行曝光,读出数据得到两个或两个以上的截取图像。
其中,拍摄状态的检测,可以根据手机的传感数据来确定。比如,可以根据手机的加速度传感器和/或角速度传感器,读取手机的传感数据,根据加速度数据确定手机的平移速度。可以将手机的平移速度与预先设定的速度阈值进行比较,或者将手机的角速度与预先设定的角速度阈值进行比较,确定手机是否处于稳定状态。
拍摄状态,还可以根据手机所拍摄的图像进行检测。比如,可以选取预定时长间隔的两帧图像的像素变化,来确定是否处于稳定状态。
比如,可以根据两帧图像中发生变化的像素与总的像素的比值来确定是否处于稳定状态。而发生变化的像素的确定,则可以通过计算像素的相似度、像素的差异等方式,来确定所比较的像素是否为发生变化的像素。
或者,还可以根据所比较的两帧图像中的锐度发生变化的边缘区域与总的边缘区域的比例,来确定手机是否处于稳定状态。
在本申请实施例中,对所生成的两帧或两帧以上的截取图像进行融合处理时,可以将待融合的截取图像进行区域划分,比如可以划分为运动区域和非运动区域,将多帧图像的非运动区域以叠加的方式进行融合。对于运动区域,则可以选取其中一帧图像的运动区域,与融合后的非运动区域融合,得到视频的一帧图像。
可以理解的是,本申请实施例提供的电子设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
本申请实施例可以根据上述方法示例对上述电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用集成的单元的情况下,图18示出了上述实施例中所涉及的电子设备的一种可能的结构示意图。该电子设备200包括:处理单元1801、显示单元1802和存储单元1803。
处理单元1801,用于对电子设备的动作进行管理。例如,处理单元1801可以控制电子设备处于录像模式下的曝光方式,处理单元1801还可以控制电子设备显示屏的显示内容等。
显示单元1802,用于显示电子设备的界面。例如,显示单元1802可以用于显示电子设备处于录像模式下的主界面,显示单元1802用于显示录像模式的预览图像等。
存储单元1803用于保存电子设备200的程序代码和数据。例如,电子设备处于录像模式下,存储单元1803可以缓存电子设备预览图像,存储单元1803还用于存储录像模式中的图像处理算法等。
当然,上述电子设备200中的单元模块包括但不限于上述处理单元1801、显示单元1802和存储单元1803。例如,电子设备200中还可以包括传感器单元、通信单元等。传感器单元可以包括光照传感器,以采集电子设备所在环境中的光照强度。通信单元用于支持电子设备200与其他装置的通信。
其中,处理单元1801可以是处理器或控制器,例如可以是中央处理器(central processing unit,CPU),数字信号处理器(digital signal processor,DSP),专用集成电路(application specific integrated circuit,ASIC),现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。处理器可以包括应用处理器和基带处理器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。存储单元1803可以是存储器。音频单元可以包括麦克风、扬声器、受话器等。通信单元可以是收发器、收发电路或通信接口等。
例如,处理单元1801为处理器(如图6所示的处理器210),显示单元1802可以为显示屏(如图6所示的显示屏294,该显示屏294可以为触摸屏,该触摸屏中可以集成显示面板和触控面板),存储单元1803可以为存储器(如图6所示的内部存储器221)。
本申请实施例还提供一种芯片系统,该芯片系统包括至少一个处理器和至少一个接口电路。处理器和接口电路可通过线路互联。例如,接口电路可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路可用于向其它装置(例如处理器)发送信号。示例性的,接口电路可读取存储器中存储的指令,并将该指令发送给处理器。当所述指令被处理器执行时,可使得电子设备执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中手机执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中手机执行的各个功能或者步骤。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (13)

  1. 一种视频拍摄方法,其特征在于,应用于电子设备,所述方法包括:
    所述电子设备确定当前的拍摄状态为运动状态,获取当前拍摄参数中的第一曝光时长;
    所述电子设备对所述第一曝光时长进行时间截取获得截取图像,所述截取图像包括两帧或两帧以上的图像,且所述截取图像与时间截取得到的曝光时长对应;
    所述电子设备将所述截取图像融合为一帧图像,根据融合后的图像生成视频。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述电子设备确定当前的拍摄状态为稳定状态,所述电子设备读取第一曝光时长对应的图像生成视频。
  3. 根据权利要求1或2所述的方法,其特征在于,所述电子设备确定当前的拍摄状态,包括:
    所述电子设备通过运动传感器采集所述电子设备的传感数据;
    根据所述传感数据确定所述电子设备当前的拍摄状态。
  4. 根据权利要求1或2所述的方法,其特征在于,所述电子设备确定当前的拍摄状态,包括:
    所述电子设备获取在预定时长间隔内所拍摄的两帧图像;
    所述电子设备确定在所述两帧图像中发生变化的像素点;
    当发生变化的像素点与图像总的像素点的比值大于或等于预定的像素点比例阈值,所述电子设备的拍摄状态为运动状态;
    当发生变化的像素点与图像总的像素点的比值小于预定的像素点比例阈值,所述电子设备的拍摄状态为稳定状态。
  5. 根据权利要求4所述的方法,其特征在于,所述电子设备确定在所述两帧图像中发生变化的像素点,包括:
    当所述两帧图像中的像素点的颜色的相似度小于预设的相似度阈值,所述电子设备确定该像素点为发生变化的像素点;
    或者,当所述两帧图像中的像素点对应的灰度值的差值大于预定的灰度阈值,所述电子设备确定该像素点为发生变化的像素点。
  6. 根据权利要求1或2所述的方法,其特征在于,所述电子设备确定当前的拍摄状态,包括:
    所述电子设备获取在预定时长间隔内所拍摄的两帧图像;
    对所述两帧图像进行边缘检测,确定两帧图像的边缘的锐度发生变化的区域;
    如果锐度发生变化的区域与边缘区域的比值大于或等于预定的边缘比例阈值,所述电子设备的拍摄状态为运动状态;
    如果锐度发生变化的区域与边缘区域的比值小于预定的边缘比例阈值,所述电子设备的拍摄状态为稳定状态。
  7. 根据权利要求1所述的方法,其特征在于,所述电子设备将所述截取图像融合为一帧图像,包括:
    所述电子设备确定所述截取图像中的运动区域和非运动区域;
    所述电子设备将所述截取图像中的非运动区域的图像进行融合,结合预先确定的所述截取图像中的指定图像的运动区域的图像,生成一帧图像。
  8. 根据权利要求7所述的方法,其特征在于,所述电子设备确定所述截取图像中的运动区域和非运动区域,包括:
    所述电子设备将所述截取图像进行配准变换,得到基准图像和变换图像;
    所述电子设备计算所述变换图像和所述基准图像之间的像素差;
    当所述像素差大于或等于预定的像素差阈值时,所述电子设备确定该像素差对应的像素点属于运动区域;
    当所述像素差小于预定的像素差阈值时,所述电子设备确定该像素差对应的像素点属于非运动区域。
  9. 根据权利要求8所述的方法,其特征在于,所述电子设备将所述截取图像进行配准变换,得到基准图像和变换图像,包括:
    所述电子设备将所述截取图像中的其中一个图像确定为基准图像;
    所述电子设备将所述截取图像中的其它图像与所述基准图像进行配准处理,确定其它图像与所述基准图像之间的变换矩阵;
    所述电子设备根据所确定的变换矩阵对所述其它图像进行图像变换,得到变换图像。
  10. 根据权利要求9所述的方法,其特征在于,所述电子设备将所述截取图像中的其中一个图像确定为基准图像,包括:
    当所述截取图像包括三个或三个以上的图像时,所述电子设备将所述截取图像中的中间位置的图像确定为基准图像。
  11. 根据权利要求7所述的方法,其特征在于,在生成一帧图像之后,所述方法还包括:
    对生成的一帧图像的运动区域进行滤波处理,得到滤波后的图像。
  12. 一种电子设备,其特征在于,所述电子设备包括:
    摄像头,用于采集图像;
    显示屏,用于显示所采集的图像;
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如权利要求1-11中任一项所述的视频拍摄方法。
  13. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括计算机指令,当所述计算机指令在计算机上运行时,使得所述计算机执行如权利要求1-11中任一项所述的视频拍摄方法。
PCT/CN2022/080722 2021-06-25 2022-03-14 视频拍摄方法、电子设备及计算机可读存储介质 WO2022267565A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110714588.7A CN113592887B (zh) 2021-06-25 2021-06-25 视频拍摄方法、电子设备及计算机可读存储介质
CN202110714588.7 2021-06-25

Publications (1)

Publication Number Publication Date
WO2022267565A1 true WO2022267565A1 (zh) 2022-12-29

Family

ID=78244797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080722 WO2022267565A1 (zh) 2021-06-25 2022-03-14 视频拍摄方法、电子设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN113592887B (zh)
WO (1) WO2022267565A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116471429A (zh) * 2023-06-20 2023-07-21 上海云梯信息科技有限公司 基于行为反馈的图像信息推送方法及实时视频传输系统
CN116506732A (zh) * 2023-06-26 2023-07-28 浙江华诺康科技有限公司 一种图像抓拍防抖的方法、装置、系统和计算机设备

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592887B (zh) * 2021-06-25 2022-09-02 荣耀终端有限公司 视频拍摄方法、电子设备及计算机可读存储介质
EP4261771A1 (en) * 2021-12-31 2023-10-18 Honor Device Co., Ltd. Image processing method and related electronic device
CN114302068B (zh) * 2022-01-06 2023-09-26 重庆紫光华山智安科技有限公司 图像拍摄方法及设备
CN114612360B (zh) * 2022-03-11 2022-10-18 北京拙河科技有限公司 基于运动模型的视频融合方法及系统
CN114979504B (zh) * 2022-05-25 2024-05-07 深圳市汇顶科技股份有限公司 相机拍摄参数确定方法、装置及存储介质
CN115278047A (zh) * 2022-06-15 2022-11-01 维沃移动通信有限公司 拍摄方法、装置、电子设备和存储介质
CN115734088A (zh) * 2022-10-28 2023-03-03 深圳锐视智芯科技有限公司 一种图像果冻效应消除方法及相关装置
CN117994368A (zh) * 2022-11-02 2024-05-07 华为终端有限公司 一种图像处理方法及电子设备
CN115689963B (zh) * 2022-11-21 2023-06-06 荣耀终端有限公司 一种图像处理方法及电子设备
CN115953422B (zh) * 2022-12-27 2023-12-19 北京小米移动软件有限公司 边缘检测方法、装置及介质
CN116579964B (zh) * 2023-05-22 2024-02-02 北京拙河科技有限公司 一种动帧渐入渐出动态融合方法及装置
CN117201930B (zh) * 2023-11-08 2024-04-16 荣耀终端有限公司 一种拍照方法和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101300830A (zh) * 2005-09-14 2008-11-05 诺基亚公司 用于实现运动驱动的多张拍摄图像稳定性的系统和方法
US20090033750A1 (en) * 2007-08-02 2009-02-05 Texas Instruments Incorporated Method and apparatus for motion stabilization
CN102833471A (zh) * 2011-06-15 2012-12-19 奥林巴斯映像株式会社 摄像装置和摄像方法
CN104754212A (zh) * 2013-12-30 2015-07-01 三星电子株式会社 电子装置以及通过使用该电子装置捕获移动对象的方法
CN110121882A (zh) * 2017-10-13 2019-08-13 华为技术有限公司 一种图像处理方法及装置
CN113592887A (zh) * 2021-06-25 2021-11-02 荣耀终端有限公司 视频拍摄方法、电子设备及计算机可读存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303727B (zh) * 2008-07-08 2011-11-23 北京中星微电子有限公司 基于视频人数统计的智能管理方法及其系统
CN110035141B (zh) * 2019-02-22 2021-07-09 华为技术有限公司 一种拍摄方法及设备
CN112738414B (zh) * 2021-04-06 2021-06-29 荣耀终端有限公司 一种拍照方法、电子设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101300830A (zh) * 2005-09-14 2008-11-05 诺基亚公司 用于实现运动驱动的多张拍摄图像稳定性的系统和方法
US20090033750A1 (en) * 2007-08-02 2009-02-05 Texas Instruments Incorporated Method and apparatus for motion stabilization
CN102833471A (zh) * 2011-06-15 2012-12-19 奥林巴斯映像株式会社 摄像装置和摄像方法
CN104754212A (zh) * 2013-12-30 2015-07-01 三星电子株式会社 电子装置以及通过使用该电子装置捕获移动对象的方法
CN110121882A (zh) * 2017-10-13 2019-08-13 华为技术有限公司 一种图像处理方法及装置
CN113592887A (zh) * 2021-06-25 2021-11-02 荣耀终端有限公司 视频拍摄方法、电子设备及计算机可读存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116471429A (zh) * 2023-06-20 2023-07-21 上海云梯信息科技有限公司 基于行为反馈的图像信息推送方法及实时视频传输系统
CN116471429B (zh) * 2023-06-20 2023-08-25 上海云梯信息科技有限公司 基于行为反馈的图像信息推送方法及实时视频传输系统
CN116506732A (zh) * 2023-06-26 2023-07-28 浙江华诺康科技有限公司 一种图像抓拍防抖的方法、装置、系统和计算机设备
CN116506732B (zh) * 2023-06-26 2023-12-05 浙江华诺康科技有限公司 一种图像抓拍防抖的方法、装置、系统和计算机设备

Also Published As

Publication number Publication date
CN113592887A (zh) 2021-11-02
CN113592887B (zh) 2022-09-02

Similar Documents

Publication Publication Date Title
WO2022267565A1 (zh) 视频拍摄方法、电子设备及计算机可读存储介质
US20230276136A1 (en) Photographing method, electronic device, and storage medium
CN114205522B (zh) 一种长焦拍摄的方法及电子设备
EP4224831A1 (en) Image processing method and electronic device
WO2023015981A1 (zh) 图像处理方法及其相关设备
WO2021223500A1 (zh) 一种拍摄方法及设备
US20230043815A1 (en) Image Processing Method and Electronic Device
CN113709355B (zh) 滑动变焦的拍摄方法及电子设备
CN115689963B (zh) 一种图像处理方法及电子设备
US20210409588A1 (en) Method for Shooting Long-Exposure Image and Electronic Device
WO2024031879A1 (zh) 显示动态壁纸的方法和电子设备
CN116320783B (zh) 一种录像中抓拍图像的方法及电子设备
WO2023035921A1 (zh) 一种录像中抓拍图像的方法及电子设备
CN116916151B (zh) 拍摄方法、电子设备和存储介质
CN115633262B (zh) 图像处理方法和电子设备
WO2023160230A9 (zh) 一种拍摄方法及相关设备
CN113891008B (zh) 一种曝光强度调节方法及相关设备
CN115150542B (zh) 一种视频防抖方法及相关设备
WO2023035868A1 (zh) 拍摄方法及电子设备
CN116055863B (zh) 一种相机的光学图像稳定装置的控制方法及电子设备
CN116723417B (zh) 一种图像处理方法和电子设备
WO2024046162A1 (zh) 一种图片推荐方法及电子设备
CN117857915A (zh) 一种拍照方法、拍照装置及电子设备
CN116664701A (zh) 光照估计方法及其相关设备
CN116709042A (zh) 一种图像处理方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827070

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE