WO2023185127A1 - Image processing method and electronic device

Image processing method and electronic device

Info

Publication number
WO2023185127A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
center point
preview image
area
electronic device
Prior art date
Application number
PCT/CN2022/140810
Other languages
French (fr)
Chinese (zh)
Inventor
陈国乔
Original Assignee
荣耀终端有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202210318644.XA external-priority patent/CN116939363B/en
Application filed by 荣耀终端有限公司
Publication of WO2023185127A1 publication Critical patent/WO2023185127A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment

Definitions

  • the present application relates to the field of image processing, specifically, to an image processing method and electronic equipment.
  • zoom capabilities can include optical zoom or digital zoom, etc.; optical zoom zooms by moving the lens in the camera module to enlarge or shrink the photographed scene; digital zoom achieves the effect of magnifying distant scenes by increasing the area occupied by each pixel of the image; for both optical zoom and digital zoom, the center of the image sensor in the electronic device is usually used as the center point of zoom.
  • This application provides an image processing method and an electronic device, which can realize automatic zooming of a target area in a photographed scene without the electronic device moving; thereby improving the user's shooting experience.
  • an image processing method is provided.
  • the image processing method is applied to electronic devices and includes:
  • the zoom magnification corresponding to the first preview image is the first magnification
  • the center point of the first preview image is the first center point
  • the second center point is the center point of the target area, and the first center point does not coincide with the second center point;
  • a second preview image is displayed, and the center point of the second preview image is the second center point.
  • target area in the first preview image may refer to an image area in the first preview image that the user is interested in, or may refer to an image area in the first preview image that needs to be tracked for zooming.
  • the image processing method provided by the embodiment of the present application can realize zoom based on the target area during the zoom shooting process; for example, during the entire zoom shooting process, the zoom center does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing users to achieve tracking zoom on the target area in the shooting scene without moving the electronic device and improving the user's shooting experience.
  • the second preview image coincides with the target area.
  • the second preview image coinciding with the target area may mean that the second preview image partially coincides with the target area, or the second preview image completely coincides with the target area.
  • the second preview image includes the target area.
  • the second preview image includes a part of the target area.
  • the position of the electronic device is the same.
  • tracking zoom can be implemented for the target area in the first preview image, thereby improving the user's shooting experience.
  • some implementations of the first aspect also include:
  • a second operation is detected, the second operation refers to the zoom magnification of the electronic device being a third magnification;
  • a third preview image is displayed, the center point of the third preview image is a third center point, and the third center point is on the line connecting the first center point and the second center point.
  • the electronic device before detecting the first operation, may also detect a second operation.
  • the second operation indicates that the zoom magnification of the electronic device is a third magnification, and the third magnification is greater than the first magnification and less than the second magnification; that is, during the zooming process of the electronic device, the zoom center can move from the first center point to the third center point, and then to the second center point, thereby avoiding a jump in the second preview image and achieving smooth zooming.
  • the line connecting the first center point and the second center point includes N center points, each of the N center points corresponds to at least one zoom factor, and N is an integer greater than or equal to 2.
  • the N center points include a first center point, a second center point and N-2 center points; wherein, the N-2 center points are located between the first center point and the second center point.
  • each of the N-2 center points can correspond to one zoom factor; the second center point can correspond to at least one zoom factor.
  • N center points may be included between the first center point and the second center point, and each center point may correspond to a zoom magnification, thereby achieving a smooth transition of the zoom center from the first center point to the second center point and allowing users to achieve tracking zoom on the target area in the shooting scene without moving the electronic device.
  • some implementations of the first aspect also include:
  • the line connecting the first center point and the second center point is equally divided to obtain the N center points.
  • the line connecting the first center point and the second center point can be equally divided, so that during the zooming process the zoom center smoothly transitions from the first center point to the second center point, allowing users to achieve tracking zoom on the target area in the shooting scene without moving the electronic device.
  • some implementations of the first aspect also include:
  • the line connecting the first center point and the second center point is divided according to an interpolation algorithm to obtain the N center points.
  • the line connecting the first center point and the second center point can be divided through an interpolation algorithm, so that during the zooming process the zoom center smoothly transitions from the first center point to the second center point, allowing users to achieve tracking zoom on the target area in the shooting scene without moving the electronic device.
  • displaying a second preview image in response to the first operation includes:
  • the second preview image is displayed using a second pixel combining method.
  • when the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, the field of view of the electronic device is small and the number of acquired pixels is small; displaying the second preview image through the first pixel merging method can increase the number of pixels in the image, thereby improving the clarity of the image.
  • the first pixel merging method may refer to using the Remosaic method to read out the image; the second pixel merging method may refer to using the Binning method to read out the image.
  • displaying the second preview image using a first pixel merging method includes:
  • a first image area is obtained, and the first image area includes M pixels;
  • the M pixels are rearranged to obtain K pixels, M and K are both positive integers, and K is greater than M;
  • the second preview image is displayed based on the K pixels.
  • when the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, the field of view of the electronic device is small and the number of acquired pixels is small; displaying the second preview image through the first pixel merging method can increase the number of pixels in the image, thereby improving the clarity of the image.
  • displaying the second preview image using a first pixel merging method includes:
  • a first image area is obtained, and the first image area includes M pixels;
  • the M pixels are merged to obtain H pixels, M and H are both positive integers, and H is less than M;
  • a second preview image is displayed based on the H pixels.
  • when the ratio between the area of the target area and the area of the first preview image is greater than the first preset threshold, the field of view of the electronic device is large and the number of acquired pixels is large; displaying the second preview image through the second pixel merging method can reduce the computational load of the electronic device and improve the performance of the electronic device.
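  • As a minimal illustration (not part of the claims), the selection between the two pixel merging methods can be sketched as a simple threshold check; the value 0.25 below is a hypothetical stand-in for the "first preset threshold", which the application does not fix.

```python
def choose_readout_mode(target_area: float, preview_area: float,
                        threshold: float = 0.25) -> str:
    """Pick a sensor readout mode from the target/preview area ratio.

    A small ratio implies a small field of view and few captured pixels,
    so the first pixel merging method (Remosaic) is used to regain
    resolution; otherwise the second method (Binning) keeps the
    computational load low. The 0.25 threshold is only an assumption.
    """
    ratio = target_area / preview_area
    return "remosaic" if ratio <= threshold else "binning"

# Example: a small target area inside a large preview triggers Remosaic.
print(choose_readout_mode(target_area=1_000_000, preview_area=12_000_000))  # remosaic
```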
  • determining the second center point in the first preview image includes:
  • the user's click operation on the first preview image is detected, and the second center point is the touch point of the user's click operation on the electronic device.
  • determining the second center point in the first preview image includes:
  • the first subject in the first preview image is detected, and the second center point is the center point of the first subject.
  • the second magnification is determined based on a ratio between an area of the target area and an area of the first preview image.
  • the second magnification is a 2x magnification.
  • the second magnification is a 3x magnification.
  • the second magnification is a 4x magnification.
  • In a second aspect, an electronic device is provided, including: one or more processors, a memory, and a display screen; the memory is coupled to the one or more processors, the memory is used to store computer program code, and the computer program code comprises computer instructions invoked by the one or more processors to cause the electronic device to perform:
  • the zoom magnification corresponding to the first preview image is the first magnification
  • the center point of the first preview image is the first center point
  • the second center point is the center point of the target area, and the first center point does not coincide with the second center point;
  • a second preview image is displayed, and the center point of the second preview image is the second center point.
  • the second preview image coincides with the target area.
  • the second preview image includes the target area.
  • the second preview image includes a part of the target area.
  • the position of the electronic device is the same.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • a third operation is detected, the third operation refers to the zoom magnification of the electronic device being a third magnification;
  • a third preview image is displayed, the center point of the third preview image is a third center point, and the third center point is on the line connecting the first center point and the second center point.
  • the line connecting the first center point and the second center point includes N center points, each of the N center points corresponds to at least one zoom factor, and N is an integer greater than or equal to 2.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • the line connecting the first center point and the second center point is equally divided to obtain the N center points.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • the line connecting the first center point and the second center point is divided according to an interpolation algorithm to obtain the N center points.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • the second preview image is displayed using a second pixel combining method.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • a first image area is obtained, and the first image area includes M pixels;
  • the M pixels are rearranged to obtain K pixels, M and K are both positive integers, and K is greater than M;
  • the second preview image is displayed based on the K pixels.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • a first image area is obtained, and the first image area includes M pixels;
  • the M pixels are merged to obtain H pixels, M and H are both positive integers, and H is less than M;
  • a second preview image is displayed based on the H pixels.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • the user's click operation on the first preview image is detected, and the second center point is the touch point of the user's click operation on the electronic device.
  • the one or more processors invoke the computer instructions to cause the electronic device to execute:
  • the first subject in the first preview image is detected, and the second center point is the center point of the first subject.
  • the second magnification is determined based on a ratio between the area of the target area and the area of the first preview image.
  • the second magnification is a 2x magnification.
  • the second magnification is a 3x magnification.
  • the second magnification is a 4x magnification.
  • In a third aspect, an electronic device is provided, including a module/unit for executing the first aspect or any image processing method in the first aspect.
  • a fourth aspect provides an electronic device.
  • the electronic device includes one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, and the computer program code includes computer instructions that are invoked by the one or more processors to cause the electronic device to perform the first aspect or any method in the first aspect.
  • a chip system is provided.
  • the chip system is applied to an electronic device.
  • the chip system includes one or more processors.
  • the processor is used to call computer instructions to cause the electronic device to execute the first aspect or any method in the first aspect.
  • a computer-readable storage medium stores computer program code.
  • when the computer program code is run by an electronic device, the electronic device is caused to execute the first aspect or any method in the first aspect.
  • a computer program product includes: computer program code.
  • when the computer program code is run by an electronic device, the electronic device is caused to execute the first aspect or any method in the first aspect.
  • the image processing method provided by the embodiment of the present application can realize zoom based on the target area during the zoom shooting process; for example, during the entire zoom shooting process, the zoom center does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing users to achieve tracking zoom on the target area in the shooting scene without moving the electronic device and improving the user's shooting experience.
  • the first pixel combining method can be used to read out the image, thereby avoiding a large loss of clarity of the image after zooming and improving the clarity of the image after zooming.
  • Figure 1 is a schematic diagram of a pixel combining method provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of another pixel combining method provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of another pixel combining method provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of another pixel combining method provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a hardware system suitable for the electronic device of the present application.
  • Figure 6 is a schematic diagram of a software system suitable for the electronic device of the present application.
  • Figure 7 is a schematic diagram of an application scenario suitable for the embodiment of the present application.
  • Figure 8 is a schematic interface diagram of an image processing method provided by an embodiment of the present application.
  • Figure 9 is a schematic interface diagram of an image processing method provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of the smooth transition of the zoom center provided by the embodiment of the present application.
  • Figure 11 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • Figure 12 is a schematic interface diagram of an image processing method provided by an embodiment of the present application.
  • Figure 13 is a schematic interface diagram of an image processing method provided by an embodiment of the present application.
  • Figure 14 is a schematic flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 15 is a schematic diagram of a second preview image overlapping the target area provided by an embodiment of the present application.
  • Figure 16 is a schematic flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 18 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the image sensor includes multiple photosensitive elements.
  • the charge collected by each photosensitive element is one pixel, and a binning operation is performed on the pixel information.
  • Binning can merge n×n pixels into one pixel.
  • Binning can combine adjacent 2×2 pixels into one pixel; that is, the colors of the adjacent 2×2 pixels are presented in the form of one pixel.
  • the electronic device obtains the image and reads out the image in a Binning manner.
  • (a) in Figure 1 is a 4×4 pixel diagram in which adjacent 2×2 pixels are synthesized into one pixel.
  • (b) in Figure 1 is a schematic diagram of the pixels read out by Binning.
  • the 2×2 pixels in the 01 area shown in (a) in Figure 1 can be combined to form the pixel R shown in (b) in Figure 1;
  • the 2×2 pixels in the 02 area shown in (a) in Figure 1 are combined to form the pixel G shown in (b) in Figure 1;
  • the 2×2 pixels in the 03 area shown in (a) in Figure 1 are combined to form the pixel G shown in (b) in Figure 1;
  • the 2×2 pixels in the 04 area shown in (a) in Figure 1 are combined to form the pixel B shown in (b) in Figure 1.
  • the Bayer format image refers to an image that only includes red, blue, and green (i.e., the three primary colors).
  • the pixel formed by the 2×2 pixels in the 01 area is red (R);
  • the pixel formed by the 2×2 pixels in the 02 area is green (G);
  • the pixel formed by the 2×2 pixels in the 03 area is green (G);
  • the pixel formed by the 2×2 pixels in the 04 area is blue (B).
  • FIG. 1 may refer to the four-in-one method of Binning, that is, the 2×2 pixels shown in (a) in Figure 1 are synthesized into one pixel shown in (b) in Figure 1;
  • Binning can also include a nine-in-one method, a sixteen-in-one method, etc.; the nine-in-one method refers to using Binning to combine 3×3 pixels to form 1 pixel; the sixteen-in-one method refers to using Binning to combine 4×4 pixels to form 1 pixel.
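  • The four-in-one Binning readout described above can be illustrated with a minimal numpy sketch; it assumes a single-channel quad-Bayer style raw array in which each adjacent 2×2 block holds samples of the same color, so every block can be merged into one output pixel.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Four-in-one Binning: merge each adjacent 2x2 block into one pixel.

    Assumes `raw` is a single-channel sensor array whose height and width are
    multiples of 2 and whose 2x2 blocks share one color, so averaging a block
    yields one merged pixel of that color.
    """
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))  # one merged pixel per 2x2 block

# Example: a 4x4 array (as in Figure 1(a)) becomes a 2x2 Bayer array (Figure 1(b)).
raw = np.arange(16, dtype=np.float32).reshape(4, 4)
print(bin_2x2(raw).shape)  # (2, 2)
```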
  • the electronic device obtains the image and reads out the image in a Binning method (for example, a nine-in-one method).
  • (a) in Figure 2 is a 6×6 pixel diagram, which combines adjacent 3×3 pixels into one pixel;
  • (b) in Figure 2 is a pixel diagram read out by Binning.
  • the 3×3 pixels in the 05 area shown in (a) in Figure 2 can be combined to form the pixel R shown in (b) in Figure 2;
  • the 3×3 pixels in the 06 area shown in (a) in Figure 2 are combined to form the pixel G shown in (b) in Figure 2;
  • the 3×3 pixels in the 07 area shown in (a) in Figure 2 are combined to form the pixel G shown in (b) in Figure 2;
  • the 3×3 pixels in the 08 area shown in (a) in Figure 2 are combined to form the pixel B shown in (b) in Figure 2.
  • the electronic device obtains the image and reads out the image in a Binning manner (for example, a sixteen-in-one manner).
  • (a) in Figure 3 is an 8×8 pixel diagram, which combines adjacent 4×4 pixels into one pixel;
  • (b) in Figure 3 is a pixel diagram read out by Binning.
  • the 4×4 pixels in the 09 area shown in (a) in Figure 3 can be combined to form the pixel R shown in (b) in Figure 3;
  • the 4×4 pixels in the 10 area shown in (a) in Figure 3 are combined to form the pixel G shown in (b) in Figure 3;
  • the 4×4 pixels in the 11 area shown in (a) in Figure 3 are combined to form the pixel G shown in (b) in Figure 3;
  • the 4×4 pixels in the 12 area shown in (a) in Figure 3 are combined to form the pixel B shown in (b) in Figure 3.
  • the above-mentioned Binning method may be called the “second pixel merging method”; the “second pixel merging method” may also be called the “second pixel arrangement method”, “second pixel combination mode”, or “second image readout mode”, etc.
  • Remosaic rearranges the pixels into a Bayer format image. For example, assuming that one pixel in the image is composed of n×n pixels, Remosaic can be used to rearrange that one pixel into n×n Bayer-pattern pixels.
  • (a) in FIG. 4 is a schematic diagram of pixels, and each pixel can be synthesized from adjacent 2×2 pixels.
  • (b) in FIG. 4 is an image diagram of the Bayer format read out using the Remosaic method. Specifically, in (a) of FIG. 4, pixel A is red, pixels B and C are green, and pixel D is blue. Each pixel in (a) in Figure 4 is divided into 3×3 pixels and the pixels are rearranged; that is, the Remosaic method is used for readout, and the read image is the Bayer format image shown in (b) in Figure 4.
  • as the zoom factor of the electronic device increases during the process of capturing images, the impact on the clarity of the image becomes greater.
  • as the zoom factor increases, the viewing range displayed by the electronic device is adjusted to part of the scene being photographed, and the corresponding field of view (FOV) of the camera also gradually decreases; as the field of view decreases, the number of acquired pixels decreases, which reduces the clarity of the image; through Remosaic, one pixel can be rearranged into multiple Bayer format pixels, thereby increasing the number of pixels; after the number of pixels increases, the detail information of the image increases, which improves the clarity of the image.
  • the above Remosaic method may be called the “first pixel merging method”; the “first pixel merging method” may also be called the “first pixel arrangement method”, “first pixel combination mode”, or “first image readout mode”, etc.
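  • The Remosaic readout can likewise be illustrated with a simplified sketch; it assumes an idealized quad-Bayer raw (2×2 same-color blocks arranged R, G / G, B, with dimensions divisible by 4) and only permutes pixels within each 4×4 tile into a standard RGGB Bayer layout, whereas a real Remosaic pipeline also performs interpolation and correction.

```python
import numpy as np

# A fixed permutation that turns one 4x4 quad-Bayer tile
#   R R G G        R G R G
#   R R G G   -->  G B G B
#   G G B B        R G R G
#   G G B B        G B G B
# into a standard RGGB Bayer tile. Keys are destination (row, col),
# values are source (row, col) inside the tile.
_PERM = {
    (0, 0): (0, 0), (0, 2): (0, 1), (2, 0): (1, 0), (2, 2): (1, 1),  # R samples
    (0, 1): (0, 2), (0, 3): (0, 3), (1, 2): (1, 2), (2, 3): (1, 3),  # G samples (top-right block)
    (1, 0): (2, 0), (2, 1): (2, 1), (3, 0): (3, 0), (3, 2): (3, 1),  # G samples (bottom-left block)
    (1, 1): (2, 2), (1, 3): (2, 3), (3, 1): (3, 2), (3, 3): (3, 3),  # B samples
}

def remosaic(raw: np.ndarray) -> np.ndarray:
    """Rearrange a quad-Bayer raw image into a standard Bayer (RGGB) image.

    Height and width of `raw` are assumed to be multiples of 4.
    """
    out = np.empty_like(raw)
    for (dr, dc), (sr, sc) in _PERM.items():
        out[dr::4, dc::4] = raw[sr::4, sc::4]
    return out
```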
  • Figure 5 shows a hardware system suitable for the electronic device of the present application.
  • the electronic device 100 may be a mobile phone, a smart screen, a tablet, a wearable electronic device, a vehicle-mounted electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a projector, etc.
  • the embodiment of the present application does not place any restrictions on the specific type of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure shown in FIG. 5 does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in FIG. 5, or the electronic device 100 may include a combination of some of the components shown in FIG. 5, or the electronic device 100 may include sub-components of some of the components shown in FIG. 5.
  • the components shown in Figure 5 may be implemented in hardware, software, or a combination of software and hardware.
  • Processor 110 may include one or more processing units.
  • the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural-network processing unit (NPU).
  • different processing units can be independent devices or integrated devices.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • connection relationship between the modules shown in FIG. 5 is only a schematic illustration and does not constitute a limitation on the connection relationship between the modules of the electronic device 100 .
  • each module of the electronic device 100 may also adopt a combination of various connection methods in the above embodiments.
  • the wireless communication function of the electronic device 100 can be implemented through components such as antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, modem processor, and baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • determining the first center point, the second center point and the target area can be performed in the processor 110; in addition, N center points can be obtained based on the first center point and the second center point;
  • the zoom center can smoothly transition from the first center point to the second center point based on the N zoom center points, and while the electronic device keeps its position unchanged, tracking zoom can be achieved for the target area in the shooting scene.
  • the relevant steps of determining the target area and the target center point in the image processing method of the present application may be executed in the processor 110 .
  • the display screen 194 may be used to display the first preview image or the second preview image.
  • the electronic device 100 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can algorithmically optimize the noise, brightness and color of the image. ISP can also optimize parameters such as exposure and color temperature of the shooting scene.
  • the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard red green blue (RGB), YUV and other format image signals.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the camera 193 may be used to obtain the first preview image or the second preview image.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3 and MPEG4.
  • the gyro sensor 180B may be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 about three axes (i.e., the x-axis, y-axis, and z-axis) can be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 180B detects the angle at which the electronic device 100 shakes, and calculates the distance that the lens module needs to compensate based on the angle, so that the lens can offset the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used in scenarios such as navigation and somatosensory games.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally the x-axis, y-axis, and z-axis). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E can also be used to identify the posture of the electronic device 100 as an input parameter for applications such as horizontal and vertical screen switching and pedometer.
  • Distance sensor 180F is used to measure distance.
  • Electronic device 100 can measure distance via infrared or laser. In some embodiments, such as in a shooting scene, the electronic device 100 may utilize the distance sensor 180F to measure distance to achieve fast focusing.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touching.
  • Fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement functions such as unlocking, accessing application locks, taking photos, and answering incoming calls.
  • The touch sensor 180K is also known as a "touch device".
  • the touch sensor 180K can be disposed on the display screen 194.
  • the touch sensor 180K and the display screen 194 form a touch screen.
  • the touch screen is also called a "touch control screen".
  • the touch sensor 180K is used to detect a touch operation acted on or near the touch sensor 180K.
  • the touch sensor 180K may pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 and at a different position from the display screen 194 .
  • the hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is introduced below.
  • FIG. 6 is a schematic diagram of a software system of an electronic device provided by an embodiment of the present application.
  • the software system can be divided into four layers, from top to bottom: application layer, application framework layer, Android Runtime and system library, and kernel layer.
  • the application layer can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the image processing method of the embodiments of the present application can be applied to the camera application; for example, the image processing method provided by the embodiments of the present application can be used to achieve zoom based on the target area during the zoom shooting process of the camera application; specifically, throughout the zoom shooting process, the zoom center of the electronic device does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, which allows the user to achieve tracking zoom on the target area in the shooting scene without moving the electronic device.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer can include some predefined functions.
  • the application framework layer includes the window manager, content provider, view system, phone manager, resource manager, and notification manager.
  • Android Runtime can include core libraries and virtual machines.
  • the Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer can run in virtual machines.
  • the virtual machine executes the java files of the application layer and application framework layer into binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules, such as a surface manager, media libraries, a 3D graphics processing library (for example, the open graphics library for embedded systems, OpenGL ES) and a 2D graphics engine (for example, the Skia graphics library, SGL).
  • the kernel layer is the layer between hardware and software.
  • the kernel layer can include driver modules such as display driver, camera driver, audio driver, and sensor driver.
  • zoom capabilities can include optical zoom or digital zoom.
  • optical zoom refers to enlarging or shrinking the photographed scene by moving the lens in the camera module; digital zoom achieves the effect of magnifying distant scenery by increasing the area of each pixel of the image; however, for both optical zoom and digital zoom, as the magnification increases during the zoom process, the zoom center point is always the center of the sensor in the electronic device; that is, the image is cropped around that center throughout the zoom process.
  • during the zoom process, the image is cropped step by step around the central area of sensor imaging, and the center of the image is always the central area of sensor imaging; if the zoom center needs to be adjusted, the orientation of the electronic device must be changed so that the electronic device points at the target object that should be at the zoom center.
  • with this zoom processing method, when it is necessary to zoom in on a target area in the photographed scene, for example, to zoom in on the user's area of interest, the user needs to move the electronic device to align it with the target area in the photographed scene, otherwise the zoom operation on the target area cannot be implemented, resulting in a poor user experience.
  • embodiments of the present application provide an image processing method that can achieve zoom based on the target area during the zoom shooting process; specifically, during the entire zoom shooting process, the zoom center of the electronic device does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing users to achieve tracking zoom on the target area in the shooting scene without moving the electronic device and improving the user's shooting experience.
  • Figure 7 is a schematic diagram of an application scenario suitable for this application.
  • the image processing method provided by the embodiment of the present application can be applied to zoom photography of an electronic device; for example, in the process of zoom photography or zoom video recording of an electronic device.
  • (a) and (b) in FIG. 7 are schematic diagrams of existing zoom processing; when the electronic device photographs the target object, the electronic device displays the preview image 210 shown in (a) in Figure 7;
  • in the preview image 210, the zoom factor of the electronic device is "1×" (1 times), that is, there is no zoom; after the electronic device detects a zoom operation, for example a 2x zoom operation, the preview image 220 shown in (b) in Figure 7 can be displayed;
  • during this zoom, the zoom center remains unchanged, that is, the center point of the preview image 210 is the same as the center point of the preview image 220.
  • (c) and (d) in FIG. 7 are schematic diagrams of the image processing method provided by the embodiment of the present application; when the electronic device photographs the target object, the electronic device displays the preview image 230 shown in (c) in Figure 7, where the zoom factor of the electronic device is "1×" (1 times), that is, there is no zoom; after the electronic device detects the operation of turning on directional zoom, it starts to execute the image processing method provided by the embodiment of the present application.
  • tracking zoom based on the target area in the preview image can then be implemented; for example, after the electronic device detects the 2x zoom operation, the preview image 240 shown in (d) of Figure 7 can be displayed; the center point of the preview image 230 and the center point of the preview image 240 can be different.
  • Figure 8 is a schematic interface diagram of the image processing method provided by the embodiment of the present application.
  • the user can instruct the electronic device to run the camera application by clicking the icon 302 of the "Camera" application on the desktop 301.
  • after the electronic device runs the camera application, the shooting interface shown in (b) in Figure 8 is displayed.
  • the electronic device is in the lock screen state, the user can instruct the electronic device to run the camera application by sliding to the right on the display screen of the electronic device, and the electronic device can display the shooting interface as shown in (b) of Figure 8 .
  • the electronic device is in a locked screen state, and the lock screen interface includes an icon of the camera application.
  • the user instructs the electronic device to open the camera application by clicking the icon of the camera application, and the electronic device can display the shooting interface shown in (b) in Figure 8.
  • when the electronic device is running another application that has permission to call the camera application, the user can instruct the electronic device to open the camera application by clicking the corresponding control, and the electronic device can then display the shooting interface shown in (b) in Figure 8.
  • the user can also instruct the electronic device to open the camera application by selecting the camera function control; as shown in (b) of Figure 8, the shooting interface can include a viewfinder 303, shooting controls and function controls; the shooting controls include the control 304, a setting control, etc.; the function controls include: large aperture, portrait, photo taking, video recording, etc.; after the electronic device detects the click operation on the control 304, it starts to execute the image processing method provided by this solution, that is, it starts to execute the intelligent zoom method provided by this solution.
  • the zoom factor indication 305 may also be included in the shooting interface.
  • the default zoom factor of an electronic device is the basic factor, which can be "1×".
  • the zoom factor can be understood as the focal length of the current camera, which is equivalent to a zoom/enlargement factor relative to the reference focal length; as shown in (c) of Figure 8, the shooting interface 306 can also include a ruler 307, which can be used to indicate the current zoom ratio; when the zoom ratio is adjusted to "2×", the shooting interface shown in (d) in Figure 8 is displayed.
  • the above is an example in which the image processing method provided by the embodiment of the present application is started by clicking the control 304.
  • the image processing method provided by the embodiment of the present application can also be enabled through the setting options in the setting control shown in (b) in Figure 8; alternatively, other controls can be set in (b) of Figure 8, and the electronic device starts the image processing method of the embodiment of the present application after detecting click operations on those controls.
  • Figure 9 is a schematic interface diagram of an image processing method provided by yet another embodiment of the present application.
  • the user can instruct the electronic device to run the camera application by clicking the icon 402 of the "Camera" application on the desktop 401; the electronic device runs the camera application and displays the shooting interface shown in (b) in Figure 9.
  • the shooting interface may include a viewfinder 403, shooting controls, and function controls; the shooting controls include the control 404, a setting control, etc.; the function controls include: large aperture, portrait, photo taking, video recording, etc.; after the electronic device detects the click operation on the control 404, it starts to execute the image processing method provided by this solution.
  • the zoom factor indication 405 may also be included in the shooting interface.
  • the default zoom factor of an electronic device is the basic factor, which can be "1×".
  • the zoom factor can be understood as the focal length of the current camera, which is equivalent to a zoom/enlargement factor relative to the reference focal length; as shown in (c) in Figure 9, the user can make a pinching gesture with two fingers (or three fingers) on the display screen of the electronic device to reduce the zoom ratio used by the electronic device; or, the user can make a gesture of sliding two fingers (or three fingers) outwards on the display screen, that is, moving the fingers apart in the direction opposite to pinching, to increase the zoom ratio used by the electronic device. For example, the zoom ratio of the camera application can be adjusted from "1×" to "2×" through a two-finger outward sliding gesture on the display screen of the electronic device, and the shooting interface shown in (d) of Figure 9 is then displayed.
  • the ruler shown in Figures 8 and 9 is located at the bottom of the viewfinder frame, and the ruler can also be located on the right side of the viewfinder frame; this application does not impose any restrictions on the specific position of the ruler.
  • the electronic device can turn on smart zoom, and the electronic device automatically adjusts the zoom magnification by identifying the distance between the electronic device and the photographed object; during the zooming process, the electronic device can perform the steps provided by the embodiments of the present application. Image processing methods.
  • Embodiments of the present application provide an image processing method that can be applied to zoom shooting of images; during the zoom shooting process, zoom based on a target area can be achieved, where the target area can refer to the image area that the user is interested in, or the image area that needs to be tracked for zooming; for example, the zoom center of the electronic device does not always need to be the center of the sensor, and instead the zoom center smoothly transitions from the center of the sensor to the center of the target area; for example, (a) in Figure 10 shows the existing zoom processing process, in which the zoom center is always located at point 1 during zoom shooting; (b) in Figure 10 shows a schematic diagram of the zoom center obtained through the image processing method in the embodiment of the present application.
  • the sensor center of the electronic device is point 1
  • the image area 410 is the target area
  • the center point of the target area is point 4
  • the image processing method provided by the embodiment of the present application can move the zoom center point from point 1 to point 2, then point 3, and finally to point 4, thereby achieving directional zoom based on the target area;
  • therefore, during zoom shooting, for example, the 4 zoom steps include zooming from image 1 to image 2, image 3 and image 4.
  • the image processing method provided by the embodiment of the present application when the position of the electronic device is the same, tracking zoom can be achieved for the target area in the shooting scene, thereby improving the user's shooting experience.
  • the zoom processing process based on the target area shown in (b) in FIG. 10 will be described in detail below with reference to FIG. 11 .
  • the zoom processing process shown in Figure 11 may include the following steps:
  • Step 1: in the shooting mode of the electronic device at the reference focal length, collect the image 601; in the image 601, determine the area size of the target area 602 and the center point of the target area 602; the location of the target area 602 may be determined according to the area size and the center point of the target area 602.
  • target area 602 may refer to an image area that the user is interested in, or the target area 602 may refer to an image area that needs to be tracked for zooming.
  • the reference focal length of the electronic device may refer to the zoom magnification of “1 ⁇ ”; where the center of the image 601 is point A; point A may be determined based on the center position of the sensor in the electronic device.
  • the location of the center point of the target area 602 can be obtained based on the following method:
  • Implementation 1: after the electronic device displays the image 601, it detects the user's click operation on the image 601; in response to the user's click operation, the point where the user touches the image 601 can be used as the target center point; for example, if the click operation on the image 601 selects the chin of the portrait in the image, the chin of the portrait can be used as the target center point, which is point D.
  • Implementation 2: portrait recognition can be performed on the image to identify the portrait in the image 601, and the center point of the portrait is used as the center point of the target area.
  • Implementation 3: an image area with a higher priority level in the image 601 may be determined based on a recognition strategy, and the center point of the image area with the higher priority level may be used as the center point of the target area.
  • the recognition strategy may mean that the shooting scene includes category 1, category 2, category 3, etc., where the priority of category 1 is higher than the priority of category 2 and the priority of category 2 is higher than the priority of category 3; for example, for the portrait shooting mode, category 1 can refer to portraits, category 2 can refer to green plants, and category 3 can refer to landscapes (for example, the sky, distant mountains, etc.); for the landscape shooting mode, category 1 can refer to landscapes (for example, the sky, distant mountains, etc.), category 2 can refer to green plants, and category 3 can refer to portraits.
  • the recognition strategy can be used to determine the image area where the category with a higher priority in the image 601 is located. This application does not limit the specific content of the recognition strategy.
  • Implementation 4: the electronic device can identify the subject in the first preview image and use the center point of the image area where the subject is located as the center point of the target area; for example, the subject can refer to a portrait, a green plant, scenery or an animal in the first preview image; after the subject in the first preview image is recognized, the center point of the image area where the subject is located can be used as the center point of the target area.
  • Implementation 5: when at least two subjects are recognized in the first preview image, prompt information may be displayed on the display screen of the electronic device, and the center point of the target subject selected by the user from the at least two subjects is used as the center point of the target area.
  • for example, the prompt message "Zoom is centered on the portrait" can be displayed on the display screen of the electronic device together with the selection controls "Yes" and "No"; if it is detected that the user clicks the control "Yes", the center of the portrait can be used as the center point of the target area; if it is detected that the user clicks the control "No", the center of the moon can be used as the center point of the target area.
  • the above implementation methods 1, 2, 3, 4 and 5 are examples of determining the center point of the target area 602; the center point of the target area 602 can also be determined through other methods. The application does not impose any restrictions on this.
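  • The implementations above can be combined into a simple selection rule, sketched below; the priority table and the DetectedSubject structure are hypothetical and only illustrate the idea of falling back from a user click to the highest-priority detected subject and finally to the sensor center.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical priority table for the portrait-mode recognition strategy
# described above: a lower number means a higher priority.
PRIORITY = {"portrait": 1, "green_plant": 2, "landscape": 3}

@dataclass
class DetectedSubject:
    category: str
    center: Tuple[int, int]  # (x, y) center of the subject's image area

def choose_target_center(tap_point: Optional[Tuple[int, int]],
                         subjects: List[DetectedSubject],
                         image_center: Tuple[int, int]) -> Tuple[int, int]:
    """Pick the target zoom center point (point D) for the first preview image."""
    if tap_point is not None:        # Implementation 1: the user's click point wins
        return tap_point
    if subjects:                     # Implementations 2-4: highest-priority subject
        best = min(subjects, key=lambda s: PRIORITY.get(s.category, 99))
        return best.center
    return image_center              # fall back to the sensor center (point A)

# Example: no tap, one detected portrait -> its center becomes point D.
print(choose_target_center(None, [DetectedSubject("portrait", (620, 480))], (960, 540)))
```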
  • the center point of the target area 602 is the target zoom center.
  • the zoom center can be smoothly transitioned from the center of the sensor (for example, point A) to the target zoom center (for example, point D), which allows the user to achieve tracking zoom on the target area in the shooting scene without moving the electronic device.
  • the area size of the target area 602 in the image 601 may be determined.
  • the electronic device can determine the target area based on the user's instructions, and the area size of the target area can be a preset value.
  • when the electronic device detects that the touch point corresponding to the user's click operation on the screen is point D, it determines the target area centered on point D, where the prompt box 610 is used to identify the target area.
  • the user can operate the prompt box 610 to adjust the size and position of the target area. For example, the user can drag the vertex area of the prompt box 610 to expand or reduce the target area. The user can also drag the edge of the prompt box 610 to select different target areas.
  • the electronic device can also automatically identify different image areas in the image 601 and obtain an image area with a higher priority in the image 601 based on the recognition strategy, and the target area 602 covers the image area with the higher priority; as shown in Figure 13,
  • the electronic device recognizes that the image 601 includes a user image area 620, a bench image area 630 and a fishing rod image area 640;
  • among them, the user image area 620 has the highest priority, so the target area 602 covers the user image area 620; the area size of the target area 602 may be determined based on the user image area 620.
  • the center point of the image 601 can be determined to be point A, and point A is the center point for the single (1x) magnification; through the above implementation methods one to five, the center point D of the target area 602 can be determined; the target zoom magnification corresponding to point D can be determined based on the area size of the target area 602; for example, assuming that the area size of the target area 602 is A1 and the area size of the image 601 is A2, the target zoom magnification corresponding to point D can be determined from the ratio between A1 and A2.
  • connect point A and point D to obtain the line between point A and point D, and divide the line between point A and point D; for example, the line between point A and point D can be equally divided according to the scale of the zoom ruler in the camera application; for example, if point A is the center of 1x zoom magnification, point D is the center of 2x zoom magnification, and the scale of the zoom ruler in the camera application is 0.2, then the line between point A and point D can be equally divided into 5 parts, corresponding to 1x zoom magnification, 1.2x zoom magnification, 1.4x zoom magnification, 1.6x zoom magnification, 1.8x zoom magnification and 2x zoom magnification.
  • the cropping area during the zoom process is related to the zoom factor; assuming that the image pixels at single zoom magnification are N×M and it is detected that the current zoom factor of the electronic device is K, then the size of the cropping area is (N/K)×(M/K), that is, (N×M)/K²; in order to ensure the complete display of the target area, the cropping area needs to cover the target area; since the cropping area corresponding to 2x zoom magnification is 1/4 of the image at 1x zoom magnification, when the area size of the target area 602 is equal to or smaller than 1/4 of the image corresponding to 1x zoom magnification, the center point of the target area can be regarded as the center point corresponding to 2x zoom magnification.
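  • as a rough illustration of the relationship above, the following Python sketch (with assumed pixel dimensions that are not part of the original text) computes the cropping-area size for a zoom factor K and the largest zoom factor whose cropping area still covers a target area of size A1:

```python
import math

def crop_size(n_px, m_px, k):
    """Cropping-area dimensions at zoom factor k for an N x M image:
    width and height each shrink by k, so the cropped area is (N*M)/k**2."""
    return n_px / k, m_px / k

def max_zoom_covering_target(a1, a2):
    """Largest zoom factor K such that the crop (area A2/K**2) still covers
    a target area of size A1, i.e. A2/K**2 >= A1, giving K <= sqrt(A2/A1)."""
    return math.sqrt(a2 / a1)

# Assumed example: a 4000 x 3000 image and a target area equal to 1/4 of
# the full image, so the crop at 2x zoom just covers the target area.
n, m = 4000, 3000
a2 = n * m
a1 = a2 / 4
print(crop_size(n, m, 2))                # (2000.0, 1500.0)
print(max_zoom_covering_target(a1, a2))  # 2.0
```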
  • the center point D of the target area 602 can be the center point of 2x zoom magnification; assuming that the number of pixels between point A and point D is 100, point A is the center point corresponding to 1x zoom magnification and point D is the center point corresponding to 2x zoom magnification, the line between point A and point D can be equally divided into 3 parts, giving center point A, center point B, center point C and center point D.
  • the line between point A and point D can also be divided unequally; for example, the line between point A and point D can be divided through an interpolation algorithm, and points B1 and C1 can be obtained through the interpolation algorithm; in this way, the zoom center can smoothly transition from the center of the sensor (for example, point A) to the center of the target area (for example, point D).
  • This application does not impose any restrictions on the specific division method of the connection between point A and point D.
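  • the division of the line between point A and point D can be pictured with a small sketch; the following Python code (illustrative only; the coordinates and the 0.2 ruler step are assumed values) maps each zoom magnification between 1x and 2x to a zoom-center position on the segment A-D, either by equal division or by a simple non-linear interpolation:

```python
def zoom_center(a, d, zoom, zoom_min=1.0, zoom_max=2.0, ease=None):
    """Return the zoom center for a given magnification.

    a, d : (x, y) positions of the sensor center and the target center.
    ease : optional mapping of t in [0, 1] to [0, 1]; None means equal
           (linear) division of the segment A-D.
    """
    t = (zoom - zoom_min) / (zoom_max - zoom_min)
    t = max(0.0, min(1.0, t))
    if ease is not None:
        t = ease(t)  # unequal division, e.g. a smooth-step curve
    return (a[0] + t * (d[0] - a[0]), a[1] + t * (d[1] - a[1]))

smoothstep = lambda t: t * t * (3 - 2 * t)  # one possible unequal division

a_pt, d_pt = (2000, 1500), (2600, 1100)     # assumed pixel coordinates
for z in (1.0, 1.2, 1.4, 1.6, 1.8, 2.0):    # 0.2 ruler step
    print(z, zoom_center(a_pt, d_pt, z), zoom_center(a_pt, d_pt, z, ease=smoothstep))
```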
  • at single zoom magnification, the field of view (FOV) of the electronic device can be regarded as 100% of the original shooting angle range, as shown in image 601 in Figure 11; when the zoom magnification of the electronic device is adjusted to 1.33x zoom, for example, the zoom center of the electronic device moves 33.3 pixels from point A along the line AD to the center point B, and the field of view of the electronic device becomes 56.5% of the original shooting angle range.
  • the field of view of the electronic device is used to indicate the maximum angle that the camera of the electronic device can capture when capturing a target object. If the target object is within the maximum angle range that the camera can capture, the light reflected by the target object can be collected by the camera, so that the image of the target object is presented in the electronic device display preview image. If the target object is outside the maximum angle range that the camera can capture, the light reflected by the target object cannot be collected by the camera, so the image of the target object cannot appear in the preview image displayed by the electronic device.
  • the field of view can also be called the "field of view range", the "field of view area", etc.
  • Step 2 When the electronic device zooms to 1.33x, the field of view of the electronic device becomes 56.5% of the original shooting angle range; therefore, the image 601 can be cropped; specifically, taking point B in the image 601 as the center point, 56.5% of the field of view range corresponding to image 601 is cropped to obtain image 603.
  • Step 3 When the electronic device zooms to 1.66x, the field of view of the electronic device becomes 36.3% of the original shooting angle range; therefore, the image 601 can be cropped; specifically, taking point C in the image 601 as the center point, 36.3% of the field of view range corresponding to image 601 is cropped to obtain image 604.
  • Step 4 When the electronic device zooms to 2x, the field of view of the electronic device becomes 25% of the original shooting angle range; therefore, the image 601 can be cropped; specifically, taking point D in the image 601 as the center point, 25% of the field of view range corresponding to image 601 is cropped to obtain image 605.
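  • the field-of-view percentages quoted in steps 2 to 4 follow from the crop area scaling with the square of the zoom factor; the short check below (a sketch, not part of the original text) reproduces them:

```python
# Remaining field of view, as a fraction of the 1x shooting range, is 1/K^2.
for k in (1.0, 1.33, 1.66, 2.0):
    print(f"{k}x zoom -> {100 / k**2:.1f}% of the original range")
# 1.0x -> 100.0%, 1.33x -> 56.5%, 1.66x -> 36.3%, 2.0x -> 25.0%
```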
  • the cropping area size of the image 605 is 1/4 of the entire image size; if the area size of the target area 602 is equal to 1/4 of the area size of the image 601, the image 605 is the same as the target area 602; if the area size of the target area 602 is less than 1/4 of the area size of the image 601, then the image 605 includes the target area 602 and has point D as its center point.
  • FIG. 11 illustrates that the area size of the target area 602 is 1/4 of the area size of the image 601 , then the image 605 and the target area 602 correspond to the same image content.
  • if zoom processing is continued on the image 605, that is, when zoom processing with a zoom magnification greater than 2 is performed on the image 601, further cropping can be performed based on the image corresponding to 2x zoom magnification; for example, taking the image 605 as the baseline, point D is used as the zoom center point for cropping.
  • since the field of view of the electronic device is 25% of the field of view corresponding to single zoom magnification, the field of view at this time is smaller and the number of acquired pixels is smaller; if the image 601 is cropped and displayed directly on the electronic device, the definition of the image corresponding to image 605 will be low; therefore, in order to improve the clarity of the image after zoom processing, the Remosaic method can be used to process the pixels in the image to obtain image 605; by using the Remosaic method, the number of corresponding pixels in the image 605 can be increased, thereby improving the clarity of the image.
  • Step 5 Display the image corresponding to the target area 602 in the camera application.
  • the image content corresponding to the image 605 can be adjusted to an image suitable for the display specifications of the electronic device for display.
  • the Binning method can be used to read out the image to improve the dynamic range and photosensitivity of the entire image; during the zoom process, if the zoom magnification is in the range from single zoom to 2x zoom, the Binning method can be used to read out the image; if the zoom magnification is in the range of 2x zoom and above, in order to avoid the problem of reduced image clarity, the Remosaic method can be used to read out the image.
  • the above example uses the 2×2 Remosaic mode to read out the image; the Remosaic mode used to read out the image may also be a 3×3 Remosaic mode or a 4×4 Remosaic mode.
  • if a 3×3 Remosaic method is used to read out the image, that is, the pixels collected by the image sensor are rearranged in a 3×3 pattern, then when the cropping area corresponding to the zoom magnification is less than or equal to 1/9 of the entire image area, the image is read out using the Remosaic method; when the cropping area corresponding to the zoom magnification is larger than 1/9 of the entire image area, the image is read out using the Binning method.
  • if a 4×4 Remosaic method is used to read out the image, that is, the pixels collected by the image sensor are rearranged in a 4×4 pattern, then when the cropping area corresponding to the zoom magnification is less than or equal to 1/16 of the entire image area, the image is read out using the Remosaic method; when the cropping area corresponding to the zoom magnification is larger than 1/16 of the entire image area, the image is read out using the Binning method.
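  • the readout-mode choice described above can be summarized in a small helper; the Python sketch below (illustrative only; the function and variable names are not from the original text) picks Binning or Remosaic from the ratio between the cropping area and the full image area for a given n×n Remosaic pattern:

```python
def choose_readout_mode(crop_area, full_area, remosaic_n=2):
    """Return 'remosaic' when the crop is small enough that an n x n Remosaic
    readout is needed to preserve detail, otherwise 'binning'.

    For an n x n Remosaic pattern the threshold is 1/n^2 of the full image:
    2x2 -> 1/4, 3x3 -> 1/9, 4x4 -> 1/16.
    """
    threshold = 1.0 / (remosaic_n ** 2)
    return "remosaic" if crop_area <= threshold * full_area else "binning"

full = 4000 * 3000                                          # assumed image area
print(choose_readout_mode(full / 4, full, remosaic_n=2))    # remosaic
print(choose_readout_mode(full / 3, full, remosaic_n=2))    # binning
print(choose_readout_mode(full / 10, full, remosaic_n=3))   # remosaic
```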
  • the image processing method provided by the embodiments of the present application can realize zoom based on the target area during zoom shooting; for example, during the entire zoom shooting process, the zoom center does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device and improving the user's shooting experience; in addition, when the zoom magnification is large, the image can be read out using the Remosaic method, thereby avoiding a large loss of clarity of the zoomed image and improving the clarity of the zoomed image.
  • FIG. 14 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the method 700 shown in FIG. 14 includes steps S710 to S750, and these steps will be described in detail below.
  • Step S710 Start the camera application in the electronic device.
  • Step S720 Display the first preview image.
  • the zoom magnification corresponding to the first preview image is the first magnification, and the center point of the first preview image is the first center point.
  • for example, the first magnification can be single zoom magnification (1×); after the electronic device runs the camera application, a preview image corresponding to single zoom magnification (1×) can be displayed.
  • the first preview image may be the preview image shown in (b) of Figure 8; or, the first preview image may be the preview image shown in (b) of Figure 9; or, the first preview image may be the image 601 shown in Figure 11, in which case the first center point may be point A.
  • Step S730 Determine the second center point.
  • the second center point is the center point of the target area, and the first center point does not coincide with the second center point.
  • that the first center point and the second center point do not coincide may mean that the first center point and the second center point are two points with different positions.
  • for example, the first center point may refer to point A in the image 601 and the second center point may refer to the center point D of the target area 602; or, as shown in (b) of Figure 10, the first center point may be point 1 and the second center point may be point 4.
  • the target zoom point may be determined by referring to the related description of determining the location of the center point of the target area in Implementation Mode 1 to Implementation Mode 5 in FIG. 11 , which will not be described again here.
  • the electronic device can determine the target area based on the user's instructions, and the area size of the target area can be a preset value.
  • when the electronic device detects that the touch point corresponding to the user's click operation on the screen is point D, it determines the target area centered on point D, where the prompt box 610 is used to identify the target area.
  • optionally, a target shooting object (for example, image area 620) may be determined; the target area includes the image area where the target shooting object is located, and the target shooting object is determined based on the priority of the subjects in the first preview image. It should be understood that the specific description can be seen in Figure 13 and will not be repeated here.
  • optionally, the electronic device detects the first subject in the first preview image, and the second center point is the center point of the first subject; for example, the first subject may refer to a portrait, a green plant, scenery or an animal in the first preview image; after the first subject in the first preview image is identified, the center point of the image area where the first subject is located can be used as the center point of the target area.
  • Step S740 The first operation is detected.
  • the first operation indicates that the zoom factor of the electronic device is the second zoom factor.
  • the first operation may refer to a sliding operation, as shown in (c) of Figure 8; or, the first operation may refer to a pinching operation, or an outward sliding operation, as shown in (c) of Figure 9 Show.
  • Step S750 In response to the first operation, display the second preview image.
  • the zoom magnification corresponding to the second preview image is the second magnification
  • the center point of the second preview image is the second center point
  • the second preview image coincides with the target area; for example, the second preview image includes the target area; or the second preview image includes a part of the target area.
  • as shown in Figure 15, the target area is 760 and the second preview image is 770, where the target area 760 includes the subject; the second preview image 770 including the target area 760 may mean that the second preview image 770 includes the target area 760 and other image areas, that is, the second preview image 770 includes the image area where the subject is located and other image areas, as shown in (a) of Figure 15; or, the second preview image 770 including the target area 760 may mean that the second preview image 770 completely coincides with the target area 760, that is, the second preview image 770 coincides with the image area where the subject is located, as shown in (b) of Figure 15; the second preview image 770 including a part of the target area 760 may mean that the second preview image 770 includes a part of the image area in the target area 760, that is, the second preview image 770 includes a part of the image area where the subject is located, as shown in (c) of Figure 15.
  • including the target area in the second preview image may mean that the target area is covered in the second preview image; or that the second preview image includes the image content of the target area.
  • the target area 760 shown in Figure 15 may refer to the image area 410 shown in (b) of Figure 10; the second preview image shown in (a) of Figure 15 may refer to image 3 shown in (b) of Figure 10, where image 3 includes the target area 410 and other image areas; in this case, the center point of the second preview image 770 may be point 3 and the center point of the target area 760 may be point 4; the second preview image shown in (b) of Figure 15 may refer to image 4 shown in (b) of Figure 10, where image 4 coincides with the target area 410, that is, the center point of the second preview image 770 and the center point of the target area 760 coincide with each other and the center point is point 4; the center point of the second preview image 770 shown in (c) of Figure 15 also coincides with the center point of the target area 760, and that center point may refer to point 4 shown in (b) of Figure 10.
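  • the three relationships shown in Figure 15 (the second preview image including the target area, coinciding with it, or including only part of it) amount to simple rectangle comparisons; the Python sketch below (with hypothetical coordinates) classifies a preview rectangle against a target rectangle:

```python
def classify(preview, target):
    """preview, target: rectangles given as (left, top, right, bottom)."""
    pl, pt, pr, pb = preview
    tl, tt, tr, tb = target
    if preview == target:
        return "coincides with the target area"      # case of Figure 15 (b)
    if pl <= tl and pt <= tt and pr >= tr and pb >= tb:
        return "includes the target area"            # case of Figure 15 (a)
    overlaps = pl < tr and tl < pr and pt < tb and tt < pb
    return "includes part of the target area" if overlaps else "misses the target area"

target_rect = (100, 100, 300, 250)
print(classify((50, 50, 350, 300), target_rect))     # includes the target area
print(classify((100, 100, 300, 250), target_rect))   # coincides with the target area
print(classify((150, 120, 400, 300), target_rect))   # includes part of the target area
```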
  • the second preview image may be a preview image as shown in (d) of Figure 8; or, the second preview image may be a preview image as shown in (d) of Figure 9; or, the second preview image may be Image 605 shown in Figure 11.
  • optionally, when the first preview image and the second preview image are displayed, the position of the electronic device is the same.
  • the same position of the electronic device may mean that the electronic device does not undergo deflection, translation, flipping, or other movement.
  • optionally, the above image processing method further includes: detecting a second operation, where the second operation indicates that the zoom magnification of the electronic device is a third magnification; and, in response to the second operation, displaying a third preview image, where the center point of the third preview image is a third center point, and the third center point is on the line connecting the first center point and the second center point.
  • the third center point may refer to point B, and the third preview image may refer to image 603; or, the third center point may refer to point C, and the third preview image may refer to image 604.
  • connection between the first center point and the second center point includes N center points, each of the N center points corresponds to at least one zoom magnification, and N is an integer greater than or equal to 2; for example , as shown in Figure 11, the first center point is point A, the second center point is point D, and the N center points can include point A, point B, point C and point D; for detailed description, please refer to the relevant description of Figure 11 , which will not be described again here.
  • the N center points include the first center point, the second center point and N-2 center points, where the N-2 center points are located on the line connecting the first center point and the second center point, and each of the N-2 center points may correspond to one zoom magnification; the second center point may correspond to at least one zoom magnification; for example, if the ratio between the area of the target area (for example, A1) and the area of the first preview image (for example, A2) is between (1/9, 1/4), then the second center point can be the center point of 2x zoom magnification, or the second center point can be the center point of a zoom magnification of √(A2/A1).
  • the line connecting the first center point and the target center point can be equally divided to obtain N center points; or, the line connecting the first center point and the target center point can be divided according to an interpolation algorithm. , N center points are obtained; for detailed description, please refer to the relevant description in Figure 11, and will not be repeated here.
  • displaying a second preview image includes:
  • if the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, the first pixel merging method is used to display the second preview image; if the ratio between the area of the target area and the area of the first preview image is greater than the first preset threshold, the second pixel merging method is used to display the second preview image.
  • the first pixel merging method may refer to using the Remosaic method to read out the image, as shown in Figure 4; the second pixel merging method may refer to using the Binning method to read the image, as shown in Figures 1 to 3.
  • optionally, the electronic device crops the first preview image using the cropping area corresponding to the second magnification to obtain a first image area, where the first image area includes M pixels; the M pixels are rearranged to obtain K pixels, where M and K are both positive integers and K is greater than M; and the second preview image is displayed based on the K pixels.
  • optionally, the first preview image is cropped using the cropping area corresponding to the second magnification to obtain a first image area, where the first image area includes M pixels; the M pixels are merged to obtain H pixels, where M and H are both positive integers and H is less than M; and the second preview image is displayed based on the H pixels.
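  • the difference between the two readouts can be pictured by counting pixels for the same cropped region; the numbers below are assumed values used only for illustration and do not come from the original text:

```python
# Assume a crop of the first preview image that contains M = 2000 x 1500 pixels.
M_w, M_h = 2000, 1500
M = M_w * M_h

# Remosaic-style readout: the same crop region is read at the sensor's native
# (un-binned) resolution; with a 2x2 pattern each preview pixel corresponds to
# a 2x2 group on the sensor, so the crop yields K = 4 * M pixels.
K = (M_w * 2) * (M_h * 2)

# Binning-style processing: each 2x2 group of the M pixels is merged into one
# pixel, so the crop yields H = M / 4 pixels.
H = (M_w // 2) * (M_h // 2)

print(M, K, H)   # 3000000 12000000 750000, i.e. K > M > H
```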
  • the first preset threshold is 1/4.
  • the first preset threshold is 1/9.
  • the first preset threshold is 1/16.
  • the second magnification is determined based on a ratio between the area of the target area and the area of the first preview image.
  • optionally, if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/4, the second magnification is 2x magnification, that is, 2x zoom magnification.
  • optionally, if the ratio between the area of the target area (for example, image 602 shown in Figure 11) and the area of the first preview image (for example, image 601 shown in Figure 11) is less than or equal to 1/9, the second magnification is 3x magnification, that is, 3x zoom magnification.
  • optionally, if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/16, the second magnification is 4x magnification, that is, 4x zoom magnification.
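  • one possible reading of these thresholds is that the second magnification is the largest integer magnification whose cropping area still covers the target area; the sketch below expresses that reading (it is an assumption for illustration, not a normative definition from the original text):

```python
def second_magnification(target_area, preview_area, max_mag=4):
    """Largest integer magnification n in {2, 3, 4} such that
    target_area <= preview_area / n**2 (the n-x crop still covers the target)."""
    for n in range(max_mag, 1, -1):
        if target_area <= preview_area / n ** 2:
            return n
    return 1  # the target is larger than the 2x crop

print(second_magnification(1, 4))    # ratio 1/4  -> 2
print(second_magnification(1, 10))   # ratio 1/10 -> 3
print(second_magnification(1, 20))   # ratio 1/20 -> 4
```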
  • the image processing method provided by the embodiments of the present application can realize zoom based on the target area during zoom shooting; specifically, during the entire zoom shooting process, the zoom center of the electronic device does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device and improving the user's shooting experience.
  • Figure 16 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the method 800 shown in Figure 16 includes steps S801 to S808, and these steps will be described in detail below.
  • Step S801 The electronic device runs the camera application.
  • Step S802 The electronic device displays the first preview image and determines the first zoom center.
  • first zoom center may refer to the first center point shown in FIG. 14 .
  • the zoom magnification corresponding to the first preview image is the first magnification; for example, the first magnification can be single zoom magnification (1×); after the electronic device runs the camera application, a preview image corresponding to single zoom magnification (1×) can be displayed.
  • the first preview image may be the preview image shown in (b) of Figure 8; or, the first preview image may be the preview image shown in (b) of Figure 9; or, the first preview image may be the image 601 shown in Figure 11.
  • the first center point may refer to point A in the image 601 ; or, as shown in (b) of FIG. 10 , the first center point may refer to point 1 .
  • Step S803 Determine the target zoom center and target area according to the first preview image.
  • target zoom center may refer to the second center point shown in FIG. 14 .
  • the second center point may refer to point D in the target area 602 ; or, as shown in (b) of FIG. 10 , the second center point may be pointing point 4 .
  • a first operation of the user on the first preview image is detected, and the target zoom center point and the target area are determined according to the first operation.
  • the first operation may refer to an operation for clicking the first preview image
  • the target zoom center point may refer to the user's touch point in the first preview image; for example, as shown in Figure 11, the target zoom center may refer to point D in the image 601.
  • the target center point is the touch point between the user and the first preview image (for example, point D); the target area is determined based on the target center point and a preset image area size (for example, image area 610).
  • optionally, a target shooting object (for example, image area 620) may be determined; the target area includes the image area where the target shooting object is located, and the target shooting object is determined based on the priority of the subjects in the first preview image. It should be understood that the specific description can be seen in Figure 13 and will not be repeated here.
  • the target zoom center may refer to the center point of the target area in the shooting scene; the target area may refer to the image area that the user is interested in, or the target area may refer to the image area that needs to be tracked for zooming; for the specific method of determining the target zoom center point, please refer to the relevant descriptions of implementation modes one to five of determining the center point of the target area in Figure 11, which will not be repeated here.
  • the zoom magnification corresponding to the target zoom center point may be determined based on the area size of the target area. See the relevant description in Figure 11, which will not be described again here.
  • Step S804 Obtain N zoom center points according to the first zoom center and the target zoom center.
  • connection between the first zoom center and the target zoom center can be equally divided to obtain N zoom centers; or, the connection between the first zoom center and the target zoom center can also be performed through an interpolation algorithm. Divide and obtain N zoom centers. See the relevant description in Figure 11, which will not be described again here.
  • N is related to the zoom ruler scale in the camera; for example, the first zoom center point corresponds to 1x zoom magnification and the target zoom center point corresponds to 2x zoom magnification; the scale from 1x zoom to 2x zoom in the camera is divided into 5 parts, that is, the process from 1x zoom to 2x zoom is 1× → 1.2× → 1.4× → 1.6× → 1.8× → 2×, and then N can be equal to 5.
  • Step S805 Determine whether the current zoom ratio meets the preset condition; if the current zoom ratio meets the preset condition, execute step S806; if the current zoom ratio does not satisfy the preset condition, execute step S807.
  • the preset condition means that the current zoom magnification is greater than or equal to 2x zoom magnification; if the current zoom magnification is greater than or equal to 2x zoom magnification, the current zoom magnification satisfies the preset condition and step S806 is executed; if the current zoom magnification is less than 2x zoom magnification, the current zoom magnification does not satisfy the preset condition and step S807 is executed.
  • Step S806 Use the first pixel combining method to generate a second preview image.
  • the first pixel combining method may refer to using the Remosaic method to read out the image.
  • Step S807 Use the second pixel combining method to generate a second preview image.
  • the second pixel binning method may refer to using the Binning method to read out the image.
  • Step S808 Display a second preview image, and the zoom factor corresponding to the second preview image is the current zoom factor.
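  • putting steps S801 to S808 together, the control flow can be sketched in Python as follows; the helper functions are simplified placeholders for the operations described above and are not real APIs of any camera framework:

```python
def determine_target(preview):
    """Placeholder for step S803: assume a fixed target center and area."""
    return (2600, 1100), (1000, 750)

def render_preview(preview, center, mag, mode):
    """Placeholder for step S808: report what would be displayed."""
    return f"crop around {center} at {mag}x using {mode} readout"

def lerp(p, q, t):
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def zoom_preview_pipeline(first_center, current_mag, target_mag=2.0, ruler_step=0.2):
    """Schematic flow of method 800 (steps S803 to S808) under the placeholders above."""
    target_center, _target_area = determine_target(None)              # S803
    n_steps = round((target_mag - 1.0) / ruler_step)                  # S804
    step = min(n_steps, round((current_mag - 1.0) / ruler_step))
    center = lerp(first_center, target_center, step / n_steps)
    mode = "remosaic" if current_mag >= target_mag else "binning"     # S805-S807
    return render_preview(None, center, current_mag, mode)            # S808

print(zoom_preview_pipeline((2000, 1500), 1.4))
print(zoom_preview_pipeline((2000, 1500), 2.0))
```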
  • the image processing method provided by the embodiments of the present application can realize zoom based on the target area during zoom shooting; specifically, during the entire zoom shooting process, the zoom center of the electronic device does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device and improving the user's shooting experience.
  • optionally, if the first pixel combining method refers to reading out the image in the 3×3 Remosaic mode, then when the area size of the target area is less than or equal to 1/9 of the area size of the first preview image, the zoom magnification corresponding to the target zoom center is 3x zoom magnification; when the area size of the target area is greater than 1/9 of the area size of the first preview image, the zoom magnification corresponding to the target zoom center is √(A2/A1), where A1 represents the area size of the target area and A2 represents the area size of the first preview image; in this case, the preset condition means that the current zoom magnification is greater than or equal to 3x zoom magnification.
  • optionally, if the first pixel combining method refers to reading out the image in the 4×4 Remosaic mode, then when the area size of the target area is less than or equal to 1/16 of the area size of the first preview image, the zoom magnification corresponding to the target zoom center is 4x zoom magnification; when the area size of the target area is greater than 1/16 of the area size of the first preview image, the zoom magnification corresponding to the target zoom center is √(A2/A1), where A1 represents the area size of the target area and A2 represents the area size of the first preview image; in this case, the preset condition means that the current zoom magnification is greater than or equal to 4x zoom magnification.
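  • combining the two alternatives above, the zoom magnification associated with the target zoom center can be sketched as a function of the Remosaic pattern size n and the area ratio; this is an illustrative reconstruction, since the exact formula is not reproduced in the text above:

```python
import math

def target_zoom_magnification(a1, a2, remosaic_n):
    """a1: area size of the target area; a2: area size of the first preview image.

    If the target fits inside the 1/n^2 crop, the target center corresponds to
    n-x zoom; otherwise it corresponds to sqrt(a2/a1), the largest zoom whose
    crop still covers the target area.
    """
    if a1 <= a2 / remosaic_n ** 2:
        return float(remosaic_n)
    return math.sqrt(a2 / a1)

print(target_zoom_magnification(1, 9, 3))    # 3.0
print(target_zoom_magnification(1, 6, 3))    # ~2.449
print(target_zoom_magnification(1, 16, 4))   # 4.0
```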
  • the image processing method provided by the embodiments of the present application can realize zoom based on the target area during zoom shooting; for example, during the entire zoom shooting process, the zoom center does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device and improving the user's shooting experience; in addition, when the zoom magnification is large, the first pixel combining method can be used to read out the image, thereby avoiding a large loss of clarity of the zoomed image and improving the clarity of the zoomed image.
  • FIG 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 900 includes a processing module 910 and a display module 920 .
  • the processing module 910 is used to start the camera application in the electronic device; the display module 920 is used to display the first preview image, where the zoom magnification corresponding to the first preview image is the first magnification and the center point of the first preview image is the first center point; the processing module 910 is also used to determine a second center point in the first preview image, where the second center point is the center point of the target area and the first center point and the second center point do not coincide, and to detect a first operation, where the first operation indicates that the zoom magnification of the electronic device is the second magnification; the display module 920 is further configured to display a second preview image in response to the first operation, where the center point of the second preview image is the second center point.
  • the second preview image coincides with the target area.
  • the second preview image includes the target area.
  • the second preview image includes a part of the target area.
  • the position of the electronic device is the same.
  • processing module 910 is also used to:
  • a second operation is detected, the second operation indicating that the zoom magnification of the electronic device is a third magnification
  • in response to the second operation, a third preview image is displayed, where the center point of the third preview image is a third center point, and the third center point is on the line connecting the first center point and the second center point.
  • connection between the first center point and the second center point includes N center points, and each of the N center points corresponds to at least one zoom magnification.
  • N is an integer greater than or equal to 2.
  • processing module 910 is also used to:
  • the line connecting the first center point and the second center point is equally divided to obtain the N center points.
  • processing module 910 is also used to:
  • the line connecting the first center point and the second center point is divided according to an interpolation algorithm to obtain the N center points.
  • processing module 910 is specifically used to:
  • if the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, the second preview image is displayed using the first pixel combining method; if the ratio is greater than the first preset threshold, the second preview image is displayed using the second pixel combining method.
  • processing module 910 is specifically used to:
  • the first preview image is cropped using the cropping area corresponding to the second magnification to obtain a first image area, and the first image area includes M pixels;
  • the M pixels are rearranged to obtain K pixels, M and K are both positive integers, and K is greater than M;
  • the second preview image is displayed based on the K pixels.
  • processing module 910 is specifically used to:
  • the first preview image is cropped using the cropping area corresponding to the second magnification to obtain a first image area, and the first image area includes M pixels;
  • the M pixels are merged to obtain H pixels, M and H are both positive integers, and H is less than M;
  • a second preview image is displayed based on the H pixels.
  • processing module 910 is specifically used to:
  • the user's click operation on the first preview image is detected, and the second center point is the touch point between the user and the electronic device.
  • processing module 910 is specifically used to:
  • the first subject in the first preview image is detected, and the second center point is the center point of the first subject.
  • the second magnification is determined based on the ratio between the area of the target area and the area of the first preview image.
  • if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/4, the second magnification is 2x magnification.
  • if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/9, the second magnification is 3x magnification.
  • if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/16, the second magnification is 4x magnification.
  • module can be implemented in the form of software and/or hardware, and is not specifically limited.
  • a “module” may be a software program, a hardware circuit, or a combination of both that implements the above functions.
  • the hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (such as a shared processor, a dedicated processor, or a group processor) and memory for executing one or more software or firmware programs, merged logic circuitry, and/or other suitable components that support the described functionality.
  • the units of each example described in the embodiments of the present application can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each specific application, but such implementations should not be considered beyond the scope of this application.
  • Figure 18 shows a schematic structural diagram of an electronic device provided by this application.
  • the dotted line in Figure 18 indicates that this unit or module is optional; the electronic device 1100 can be used to implement the method described in the above method embodiment.
  • the electronic device 1100 includes one or more processors 1101, and the one or more processors 1101 can support the electronic device 1100 to implement the image processing method in the method embodiment.
  • Processor 1101 may be a general-purpose processor or a special-purpose processor.
  • the processor 1101 may be a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, such as discrete gates, transistor logic devices, or discrete hardware components.
  • the processor 1101 can be used to control the electronic device 1100, execute software programs, and process data of the software programs.
  • the electronic device 1100 may also include a communication unit 1105 to implement input (reception) and output (transmission) of signals.
  • the electronic device 1100 may be a chip, and the communication unit 1105 may be an input and/or output circuit of the chip, or the communication unit 1105 may be a communication interface of the chip, and the chip may be used as a component of a terminal device or other electronic device.
  • the electronic device 1100 may be a terminal device, and the communication unit 1105 may be a transceiver of the terminal device, or the communication unit 1105 may be a transceiver circuit of the terminal device.
  • the electronic device 1100 may include one or more memories 1102 on which a program 1104 is stored.
  • the program 1104 may be run by the processor 1101 to generate instructions 1103, so that the processor 1101 performs the image processing method described in the above method embodiments according to the instructions 1103.
  • data may also be stored in the memory 1102.
  • the processor 1101 can also read the data stored in the memory 1102.
  • the data can be stored at the same storage address as the program 1104, or the data can be stored at a different storage address from the program 1104.
  • the processor 1101 and the memory 1102 can be provided separately or integrated together, for example, integrated on a system on chip (SOC) of the terminal device.
  • the memory 1102 can be used to store the related programs 1104 of the image processing method provided in the embodiment of the present application
  • the processor 1101 can be used to call the related programs 1104 of the image processing method stored in the memory 1102 when performing image processing.
  • to execute the image processing method of the embodiments of the present application, for example: starting the camera application in the electronic device; displaying the first preview image, where the zoom magnification corresponding to the first preview image is the first magnification and the center point of the first preview image is the first center point; determining the second center point in the first preview image, where the second center point is the center point of the target area and the first center point and the second center point do not coincide; detecting the first operation, where the first operation indicates that the zoom magnification of the electronic device is the second magnification; and, in response to the first operation, displaying the second preview image, where the center point of the second preview image is the second center point.
  • This application also provides a computer program product, which when executed by the processor 1101 implements the image processing method of any method embodiment in this application.
  • the computer program product may be stored in the memory 1102, such as a program 1104.
  • the program 1104 is finally converted into an executable object file that can be executed by the processor 1101 through processes such as preprocessing, compilation, assembly and linking.
  • This application also provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a computer, the image processing method described in any method embodiment of this application is implemented.
  • the computer program may be a high-level language program or an executable object program.
  • Memory 1102 may be volatile memory or nonvolatile memory, or memory 1102 may include both volatile memory and nonvolatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • Volatile memory can be random access memory (RAM), which is used as an external cache.
  • by way of example but not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (synchlink DRAM, SLDRAM) and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the embodiments of the electronic equipment described above are only illustrative.
  • the division of the modules is only a logical function division.
  • in actual implementation, there may be other division methods; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the size of the sequence numbers of the above processes does not mean the order of execution; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application is essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: USB flash drive, mobile hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc and other media that can store program code.

Abstract

The present application relates to the field of image processing. Provided are an image processing method and an electronic device, the image processing method being applied to the electronic device and comprising: starting a camera application in the electronic device; displaying a first preview image, the zoom ratio corresponding to the first preview image being a first ratio, and the center point of the first preview image being a first center point; determining a second center point in the first preview image, the second center point being the center point of a target area, and the first center point not coinciding with the second center point; detecting a first operation, the first operation indicating that the zoom ratio of the electronic device is a second ratio; and in response to the first operation, displaying a second preview image, the center point of the second preview image being the second center point. On the basis of the technical solution of the present application, automatic zooming of a target area in a shot scene can be achieved, so that photographing experience of a user is improved.

Description

Image processing method and electronic device
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on March 29, 2022, with application number 202210318644.X and entitled "Image processing method and electronic device", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of image processing, and specifically to an image processing method and an electronic device.
Background
With the rapid development and widespread application of multimedia technology and network technology, people use a large amount of image information in daily life and production activities. To meet the needs of shooting in different scenes, the camera module in an electronic device usually has a zoom capability, which may include optical zoom or digital zoom; optical zoom enlarges or shrinks the photographed scene by moving the lenses in the camera module, while digital zoom achieves the effect of magnifying a distant scene by increasing the area of each pixel of the image; for both optical zoom and digital zoom, the center of the image sensor in the electronic device is usually used as the center point of the zoom.
However, when a target area in the photographed scene needs to be enlarged, for example, an area of interest to the user, the user has to move the electronic device to aim it at the target area in the photographed scene, which results in a poor user experience.
Therefore, how to automatically zoom on a target area in a photographed scene without moving the electronic device has become an urgent problem to be solved.
Summary of the invention
This application provides an image processing method and an electronic device, which can realize automatic zooming of a target area in a photographed scene without moving the electronic device, thereby improving the user's shooting experience.
In a first aspect, an image processing method is provided. The image processing method is applied to an electronic device and includes:
starting a camera application in the electronic device;
displaying a first preview image, where the zoom magnification corresponding to the first preview image is a first magnification, and the center point of the first preview image is a first center point;
determining a second center point in the first preview image, where the second center point is the center point of a target area, and the first center point does not coincide with the second center point;
detecting a first operation, where the first operation indicates that the zoom magnification of the electronic device is a second magnification;
in response to the first operation, displaying a second preview image, where the center point of the second preview image is the second center point.
It should be understood that the target area in the first preview image may refer to an image area in the first preview image that the user is interested in, or may refer to an image area in the first preview image that needs to be tracked for zooming.
With the image processing method provided by the embodiments of this application, zoom based on the target area can be realized during zoom shooting; for example, during the entire zoom shooting process, the zoom center does not always need to be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device and improving the user's shooting experience.
With reference to the first aspect, in some implementations of the first aspect, the second preview image coincides with the target area.
It should be understood that the second preview image coinciding with the target area may mean that the second preview image partially coincides with the target area, or that the second preview image completely coincides with the target area.
With reference to the first aspect, in some implementations of the first aspect, the second preview image includes the target area.
With reference to the first aspect, in some implementations of the first aspect, the second preview image includes a part of the target area.
With reference to the first aspect, in some implementations of the first aspect, when the first preview image and the second preview image are displayed, the position of the electronic device is the same.
It should be noted that the electronic device being in the same position may mean that the electronic device does not undergo deflection, translation, flipping or other movement.
In the embodiments of this application, when the electronic device is in the same position, that is, when the electronic device has not moved, tracking zoom can be implemented for the target area in the first preview image, improving the user's shooting experience.
With reference to the first aspect, some implementations of the first aspect further include:
detecting a second operation, where the second operation indicates that the zoom magnification of the electronic device is a third magnification;
in response to the second operation, displaying a third preview image, where the center point of the third preview image is a third center point, and the third center point is on the line connecting the first center point and the second center point.
In the embodiments of this application, before detecting the first operation, the electronic device may also detect the second operation, which indicates that the zoom magnification of the electronic device is the third magnification, where the third magnification is greater than the first magnification and less than the second magnification; that is, during zooming, the zoom center can move from the first center point to the third center point and then to the second center point, thereby avoiding a jump in the second preview image and achieving smooth zoom.
With reference to the first aspect, in some implementations of the first aspect, the line connecting the first center point and the second center point includes N center points, each of the N center points corresponds to at least one zoom magnification, and N is an integer greater than or equal to 2.
In a possible implementation, the N center points include the first center point, the second center point and N-2 center points; the N-2 center points are located on the line connecting the first center point and the second center point, each of the N-2 center points may correspond to one zoom magnification, and the second center point may correspond to at least one zoom magnification.
In the embodiments of this application, N center points may be included between the first center point and the second center point, and each center point may correspond to a zoom magnification, thereby achieving a smooth transition of the zoom center from the first center point to the second center point and allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device.
With reference to the first aspect, some implementations of the first aspect further include:
equally dividing the line connecting the first center point and the second center point to obtain the N center points.
In the embodiments of this application, the line connecting the first center point and the second center point may be equally divided, so that during zooming the zoom center can transition smoothly from the first center point to the second center point, allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device.
With reference to the first aspect, some implementations of the first aspect further include:
dividing the line connecting the first center point and the second center point according to an interpolation algorithm to obtain the N center points.
In the embodiments of this application, the line connecting the first center point and the second center point may be divided by an interpolation algorithm, so that during zooming the zoom center can transition smoothly from the first center point to the second center point, allowing the user to achieve tracking zoom for the target area in the shooting scene without moving the electronic device.
结合第一方面,在第一方面的某些实现方式中,所述响应于所述第一操作,显示第二预览图像,包括:In conjunction with the first aspect, in some implementations of the first aspect, displaying a second preview image in response to the first operation includes:
若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于第一预设阈值,采用第一像素合并方式显示所述第二预览图像;If the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, display the second preview image using a first pixel merging method;
若所述目标区域的面积与所述第一预览图像的面积之间的比值大于第一预设阈值,采用第二像素合并方式显示所述第二预览图像。If the ratio between the area of the target area and the area of the first preview image is greater than the first preset threshold, the second preview image is displayed using a second pixel combining method.
在本申请的实施例中,在目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于第一预设阈值时,由于电子设备的视场角较小,因此获取的像素点数量较少;通过第一像素合并方式显示第二预览图像,可以增加图像中对应的像素点数量,从而提升图像的清晰度。In the embodiment of the present application, when the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, since the field of view of the electronic device is small, the acquired pixels The number of points is small; displaying the second preview image through the first pixel merging method can increase the number of corresponding pixels in the image, thereby improving the clarity of the image.
应理解,第一像素合并方式可以是指采用Remosaic方式读出图像;第二像素合并发方式可以是指采用Binning方式读出图像。It should be understood that the first pixel binning method may refer to using the Remosaic method to read the image; the second pixel binning method may refer to using the Binning method to read the image.
结合第一方面,在第一方面的某些实现方式中,所述采用第一像素合并方式显示所述第二预览图像,包括:With reference to the first aspect, in some implementations of the first aspect, displaying the second preview image using a first pixel merging method includes:
采用所述第二倍率对应的裁切区域对所述第一预览图像进行裁切处理,得到第一图像区域,所述第一图像区域中包括M个像素;Using the cutting area corresponding to the second magnification to perform cutting processing on the first preview image, a first image area is obtained, and the first image area includes M pixels;
对所述M个像素进行重新排列处理,得到K个像素,M、K均为正整数,K大于M;The M pixels are rearranged to obtain K pixels, M and K are both positive integers, and K is greater than M;
基于所述K个像素显示所述第二预览图像。The second preview image is displayed based on the K pixels.
在本申请的实施例中,在目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于第一预设阈值时,由于电子设备的视场角较小,因此获取的像素点数量较少;通过第一像素合并方式显示第二预览图像,可以增加图像中对应的像素点数量,从而提升图像的清晰度。In the embodiment of the present application, when the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, since the field of view of the electronic device is small, the acquired pixels The number of points is small; displaying the second preview image through the first pixel merging method can increase the number of corresponding pixels in the image, thereby improving the clarity of the image.
With reference to the first aspect, in some implementations of the first aspect, displaying the second preview image using the second pixel merging method includes:
cropping the first preview image with the cropping region corresponding to the second magnification to obtain a first image area, the first image area including M pixels;
merging the M pixels to obtain H pixels, where M and H are both positive integers and H is less than M;
displaying the second preview image based on the H pixels.
In an embodiment of this application, when the ratio of the area of the target area to the area of the first preview image is greater than the first preset threshold, the field of view of the electronic device is large and the number of acquired pixels is large; displaying the second preview image using the second pixel merging method reduces the computation load of the electronic device and improves its performance.
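Illustratively, the cropping step shared by both readout paths could look like the sketch below (the function name, array layout, and clamping behavior are assumptions); the cropped region contains M pixels, after which Remosaic readout yields K > M pixels while Binning readout yields H < M pixels:

    import numpy as np

    def crop_to_magnification(frame, center, magnification):
        # Crop the frame to the region corresponding to the given magnification,
        # centered on the requested zoom center (clamped to the frame borders).
        h, w = frame.shape[:2]
        ch, cw = int(h / magnification), int(w / magnification)
        cy, cx = center
        y0 = max(0, min(h - ch, int(cy - ch / 2)))
        x0 = max(0, min(w - cw, int(cx - cw / 2)))
        return frame[y0:y0 + ch, x0:x0 + cw]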
With reference to the first aspect, in some implementations of the first aspect, determining the second center point in the first preview image includes:
detecting a tap operation performed by the user on the first preview image, where the second center point is the point at which the user touches the electronic device.
With reference to the first aspect, in some implementations of the first aspect, determining the second center point in the first preview image includes:
detecting a first subject in the first preview image, where the second center point is the center point of the first subject.
With reference to the first aspect, in some implementations of the first aspect, the second magnification is determined based on the ratio of the area of the target area to the area of the first preview image.
With reference to the first aspect, in some implementations of the first aspect, if the ratio of the area of the target area to the area of the first preview image is less than or equal to 1/4, the second magnification is 2x.
With reference to the first aspect, in some implementations of the first aspect, if the ratio of the area of the target area to the area of the first preview image is less than or equal to 1/9, the second magnification is 3x.
With reference to the first aspect, in some implementations of the first aspect, if the ratio of the area of the target area to the area of the first preview image is less than or equal to 1/16, the second magnification is 4x.
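Read together, these implementations amount to choosing the largest integer magnification k such that the area ratio does not exceed 1/k²; a hedged sketch of that mapping (the 4x cap and the function name are assumptions for illustration):

    import math

    def second_magnification(target_area, preview_area, max_zoom=4):
        # Area ratio <= 1/4 -> 2x, <= 1/9 -> 3x, <= 1/16 -> 4x (capped).
        ratio = target_area / preview_area
        k = int(math.floor(1.0 / math.sqrt(ratio)))
        return max(1, min(k, max_zoom))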
In a second aspect, an electronic device is provided. The electronic device includes one or more processors, a memory, and a display screen; the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform:
starting the camera application in the electronic device;
displaying a first preview image, where the zoom magnification corresponding to the first preview image is a first magnification and the center point of the first preview image is a first center point;
determining a second center point in the first preview image, where the second center point is the center point of a target area and the first center point does not coincide with the second center point;
detecting a first operation, where the first operation indicates that the zoom magnification of the electronic device is a second magnification;
in response to the first operation, displaying a second preview image, where the center point of the second preview image is the second center point.
With reference to the second aspect, in some implementations of the second aspect, the second preview image coincides with the target area.
With reference to the second aspect, in some implementations of the second aspect, the second preview image includes the target area.
With reference to the second aspect, in some implementations of the second aspect, the second preview image includes a part of the target area.
With reference to the second aspect, in some implementations of the second aspect, the electronic device is in the same position when the first preview image and the second preview image are displayed.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
detecting a third operation, where the third operation indicates that the zoom magnification of the electronic device is a third magnification;
in response to the third operation, displaying a third preview image, where the center point of the third preview image is a third center point and the third center point is on the line connecting the first center point and the second center point.
With reference to the second aspect, in some implementations of the second aspect, the line connecting the first center point and the second center point includes N center points, each of the N center points corresponds to at least one zoom magnification, and N is an integer greater than or equal to 2.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
dividing the line connecting the first center point and the second center point into equal parts to obtain the N center points.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
dividing the line connecting the first center point and the second center point according to an interpolation algorithm to obtain the N center points.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
if the ratio of the area of the target area to the area of the second preview image is less than or equal to the first preset threshold, displaying the second preview image using the first pixel merging method;
if the ratio of the area of the target area to the area of the second preview image is greater than the first preset threshold, displaying the second preview image using the second pixel merging method.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
cropping the first preview image with the cropping region corresponding to the second magnification to obtain a first image area, the first image area including M pixels;
rearranging the M pixels to obtain K pixels, where M and K are both positive integers and K is greater than M;
displaying the second preview image based on the K pixels.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
cropping the first preview image with the cropping region corresponding to the second magnification to obtain a first image area, the first image area including M pixels;
merging the M pixels to obtain H pixels, where M and H are both positive integers and H is less than M;
displaying the second preview image based on the H pixels.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
detecting a tap operation performed by the user on the first preview image, where the second center point is the point at which the user touches the electronic device.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
detecting a first subject in the first preview image, where the second center point is the center point of the first subject.
With reference to the second aspect, in some implementations of the second aspect, the second magnification is determined based on the ratio of the area of the target area to the area of the first preview image.
With reference to the second aspect, in some implementations of the second aspect, if the ratio of the area of the target area to the area of the first preview image is less than or equal to 1/4, the second magnification is 2x.
With reference to the second aspect, in some implementations of the second aspect, if the ratio of the area of the target area to the area of the first preview image is less than or equal to 1/9, the second magnification is 3x.
With reference to the second aspect, in some implementations of the second aspect, if the ratio of the area of the target area to the area of the first preview image is less than or equal to 1/16, the second magnification is 4x.
In a third aspect, an electronic device is provided, including modules/units for performing the image processing method in the first aspect or any implementation of the first aspect.
In a fourth aspect, an electronic device is provided. The electronic device includes one or more processors and a memory; the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform the method in the first aspect or any implementation of the first aspect.
In a fifth aspect, a chip system is provided. The chip system is applied to an electronic device and includes one or more processors, and the processors are configured to invoke computer instructions to cause the electronic device to perform the method in the first aspect or any implementation of the first aspect.
In a sixth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores computer program code which, when run by an electronic device, causes the electronic device to perform the method in the first aspect or any implementation of the first aspect.
In a seventh aspect, a computer program product is provided. The computer program product includes computer program code which, when run by an electronic device, causes the electronic device to perform the method in the first aspect or any implementation of the first aspect.
With the image processing method provided by the embodiments of this application, zoom based on the target area can be achieved during zoom shooting; for example, throughout the zoom shooting process, the zoom center does not need to remain at the center of the sensor, and instead transitions smoothly from the center of the sensor to the center of the target area in the photographed scene, so that the user can track and zoom on the target area without moving the electronic device, which improves the user's shooting experience. In addition, when the field of view of the electronic device is small, the first pixel merging method can be used to read out the image, which avoids a large loss of clarity after zooming and improves the clarity of the zoomed image.
Description of drawings
Figure 1 is a schematic diagram of a pixel merging method according to an embodiment of this application;
Figure 2 is a schematic diagram of another pixel merging method according to an embodiment of this application;
Figure 3 is a schematic diagram of another pixel merging method according to an embodiment of this application;
Figure 4 is a schematic diagram of another pixel merging method according to an embodiment of this application;
Figure 5 is a schematic diagram of a hardware system applicable to the electronic device of this application;
Figure 6 is a schematic diagram of a software system applicable to the electronic device of this application;
Figure 7 is a schematic diagram of an application scenario applicable to an embodiment of this application;
Figure 8 is a schematic interface diagram of an image processing method according to an embodiment of this application;
Figure 9 is a schematic interface diagram of an image processing method according to an embodiment of this application;
Figure 10 is a schematic diagram of a smooth transition of the zoom center according to an embodiment of this application;
Figure 11 is a schematic diagram of an image processing method according to an embodiment of this application;
Figure 12 is a schematic interface diagram of an image processing method according to an embodiment of this application;
Figure 13 is a schematic interface diagram of an image processing method according to an embodiment of this application;
Figure 14 is a schematic flowchart of an image processing method according to an embodiment of this application;
Figure 15 is a schematic diagram of a second preview image coinciding with the target area according to an embodiment of this application;
Figure 16 is a schematic flowchart of an image processing method according to an embodiment of this application;
Figure 17 is a schematic structural diagram of an electronic device according to an embodiment of this application;
Figure 18 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed description of embodiments
In the embodiments of this application, the terms "first", "second", and the like are used only for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments, unless otherwise specified, "a plurality of" means two or more.
To facilitate understanding of the embodiments of this application, the related concepts involved in the embodiments of this application are first briefly described.
1. Pixel binning (Binning)
When an electronic device captures an image, light reflected by the target object is collected by the camera and transmitted to the image sensor. The image sensor includes a plurality of photosensitive elements; the charge collected by each photosensitive element corresponds to one pixel, and a binning read-out operation is performed on the pixel information. Specifically, binning merges n×n pixels into one pixel; for example, binning may combine adjacent 2×2 pixels into one pixel, that is, the colors of the adjacent 2×2 pixels are presented as a single pixel.
For example, (a) in Figure 1 is a schematic diagram of the process in which the electronic device reads out an image in binning mode after acquiring it. (a) in Figure 1 shows a 4×4 pixel array in which adjacent 2×2 pixels are combined into one pixel, and (b) in Figure 1 shows the pixels read out in binning mode. For example, with binning, the 2×2 pixels in area 01 shown in (a) of Figure 1 are merged to form pixel R shown in (b) of Figure 1; the 2×2 pixels in area 02 are merged to form pixel G; the 2×2 pixels in area 03 are merged to form pixel G; and the 2×2 pixels in area 04 are merged to form pixel B.
Taking a Bayer-format output image as an example, a Bayer-format image is an image that includes only red, green, and blue (the three primary colors). For example, the pixel formed by the 2×2 pixels in area 01 is red (R), the pixel formed by the 2×2 pixels in area 02 is green (G), the pixel formed by the 2×2 pixels in area 03 is green (G), and the pixel formed by the 2×2 pixels in area 04 is blue (B).
It should be understood that Figure 1 shows the four-in-one binning mode, in which the 2×2 pixels shown in (a) of Figure 1 are combined into the single pixel shown in (b) of Figure 1. Binning may also include a nine-in-one mode, a sixteen-in-one mode, and the like; the nine-in-one mode merges 3×3 pixels into one pixel, and the sixteen-in-one mode merges 4×4 pixels into one pixel.
For example, (a) in Figure 2 is a schematic diagram of the process in which the electronic device reads out an image in binning mode (for example, the nine-in-one mode). (a) in Figure 2 shows a 6×6 pixel array in which adjacent 3×3 pixels are combined into one pixel, and (b) in Figure 2 shows the pixels read out in binning mode. For example, with binning, the 3×3 pixels in area 05 shown in (a) of Figure 2 are merged to form pixel R shown in (b) of Figure 2; the 3×3 pixels in area 06 are merged to form pixel G; the 3×3 pixels in area 07 are merged to form pixel G; and the 3×3 pixels in area 08 are merged to form pixel B.
For example, (a) in Figure 3 is a schematic diagram of the process in which the electronic device reads out an image in binning mode (for example, the sixteen-in-one mode). (a) in Figure 3 shows a pixel array in which adjacent 4×4 pixels are combined into one pixel, and (b) in Figure 3 shows the pixels read out in binning mode. For example, with binning, the 4×4 pixels in area 09 shown in (a) of Figure 3 are merged to form pixel R shown in (b) of Figure 3; the 4×4 pixels in area 10 are merged to form pixel G; the 4×4 pixels in area 11 are merged to form pixel G; and the 4×4 pixels in area 12 are merged to form pixel B.
In the embodiments of this application, for ease of understanding, the above binning mode is referred to as the "second pixel merging method"; the "second pixel merging method" may also be referred to as the "second pixel arrangement method", the "second pixel combination method", or the "second image readout mode".
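As a rough numerical illustration of the four-in-one binning described above (a numpy sketch of the arithmetic only, not the sensor's actual readout path; averaging is assumed as the merge operation):

    import numpy as np

    def bin_2x2(raw):
        # Four-in-one binning: merge each 2x2 block of same-colour samples into
        # one output pixel by averaging (quad-Bayer layout assumed).
        h, w = raw.shape
        blocks = raw[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        return blocks.mean(axis=(1, 3))

    raw = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 array like (a) in Figure 1
    binned = bin_2x2(raw)                           # 2x2 output, one pixel per area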
2. Pixel rearrangement (Remosaic)
When an image is read out in Remosaic mode, its pixels are rearranged into a Bayer-format image. For example, if one pixel in the image is composed of n×n pixels, Remosaic rearranges that pixel back into n×n pixels.
For example, (a) in Figure 4 is a schematic diagram of pixels, each of which is synthesized from adjacent 2×2 pixels, and (b) in Figure 4 shows the Bayer-format image read out in Remosaic mode. Specifically, in (a) of Figure 4, pixel A is red, pixels B and C are green, and pixel D is blue. Each pixel in (a) of Figure 4 is divided into 2×2 pixels and rearranged; that is, when the image is read out in Remosaic mode, the result is the Bayer-format image shown in (b) of Figure 4.
It should be understood that the larger the zoom factor used when the electronic device captures an image, the greater the impact on image clarity. For example, as the zoom factor increases, the framing range displayed by the electronic device is adjusted to a part of the photographed scene, and the field of view (FOV) of the camera gradually decreases; as the field of view decreases, fewer pixels are acquired and image clarity drops. With Remosaic, one pixel can be rearranged into multiple Bayer-format pixels, which increases the number of pixels; with more pixels, the image carries more detail, so its clarity can be improved.
In the embodiments of this application, for ease of understanding, the above Remosaic mode is referred to as the "first pixel merging method"; the "first pixel merging method" may also be referred to as the "first pixel arrangement method", the "first pixel combination method", or the "first image readout mode".
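For illustration only, a crude Remosaic of a quad-Bayer frame (where each colour occupies a 2×2 block) into a standard RGGB Bayer pattern can be sketched as a pure relocation of samples; real remosaic algorithms also interpolate, and the layout assumptions (RGGB order, frame dimensions divisible by 4) are assumptions of this sketch:

    import numpy as np

    def remosaic_quad_to_bayer(raw):
        # Shuffle a quad-Bayer frame into an RGGB Bayer pattern of the same size.
        h, w = raw.shape  # h and w assumed to be multiples of 4
        out = np.empty_like(raw)
        for y in range(0, h, 4):
            for x in range(0, w, 4):
                cell = raw[y:y + 4, x:x + 4]
                r = cell[0:2, 0:2].ravel()                    # 4 red samples
                g = np.concatenate([cell[0:2, 2:4].ravel(),
                                    cell[2:4, 0:2].ravel()])  # 8 green samples
                b = cell[2:4, 2:4].ravel()                    # 4 blue samples
                bayer = np.empty((4, 4), dtype=raw.dtype)
                bayer[0::2, 0::2] = r.reshape(2, 2)           # R at even rows/cols
                bayer[0::2, 1::2] = g[:4].reshape(2, 2)
                bayer[1::2, 0::2] = g[4:].reshape(2, 2)
                bayer[1::2, 1::2] = b.reshape(2, 2)           # B at odd rows/cols
                out[y:y + 4, x:x + 4] = bayer
        return out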
The image processing method and the electronic device in the embodiments of this application are described below with reference to the accompanying drawings.
Figure 5 shows a hardware system applicable to the electronic device of this application.
The electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, a vehicle-mounted electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a projector, or the like; the embodiments of this application do not impose any limitation on the specific type of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be noted that the structure shown in Figure 5 does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than those shown in Figure 5, a combination of some of the components shown in Figure 5, or sub-components of some of the components shown in Figure 5. The components shown in Figure 5 may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural-network processing unit (NPU). Different processing units may be independent devices or integrated devices. The controller may generate operation control signals based on instruction operation codes and timing signals to control instruction fetching and execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory; this avoids repeated access, reduces the waiting time of the processor 110, and thus improves system efficiency.
The connection relationships between the modules shown in Figure 5 are only schematic and do not constitute a limitation on the connection relationships between the modules of the electronic device 100. Optionally, the modules of the electronic device 100 may also use a combination of the various connection modes in the foregoing embodiments.
The wireless communication function of the electronic device 100 may be implemented through components such as the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization; for example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In some other embodiments, the antennas may be used in combination with a tuning switch.
The electronic device 100 implements the display function through the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
For example, in the embodiments of this application, the determination of the first center point, the second center point, and the target area may be performed in the processor 110; in addition, N center points may be obtained based on the first center point and the second center point. During zoom shooting, the zoom center can transition smoothly from the first center point to the second center point based on the N zoom center points, so that tracking zoom on the target area in the photographed scene is achieved while the electronic device keeps its position unchanged.
For example, the steps of determining the target area and the target center point in the image processing method of this application may be performed in the processor 110.
For example, the display screen 194 may be used to display the first preview image or the second preview image.
The electronic device 100 may implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can perform algorithmic optimization on the noise, brightness, and color of the image, and can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as red green blue (RGB) or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
For example, in the embodiments of this application, the camera 193 may be used to acquire the first preview image or the second preview image.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency, the digital signal processor is used to perform a Fourier transform or the like on the frequency energy.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (that is, the x-axis, y-axis, and z-axis) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve stabilization. The gyroscope sensor 180B may also be used in scenarios such as navigation and motion-sensing games.
The acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally along the x-axis, y-axis, and z-axis), and can detect the magnitude and direction of gravity when the electronic device 100 is stationary. The acceleration sensor 180E can also be used to identify the posture of the electronic device 100 as an input parameter for applications such as switching between landscape and portrait modes and pedometers.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, for example in a shooting scene, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
The ambient light sensor 180L is used to sense ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos, and can cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement functions such as unlocking, accessing an application lock, taking photos, and answering incoming calls.
The touch sensor 180K is also called a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen. The touch sensor 180K is used to detect a touch operation performed on or near it, and may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In some other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.
The hardware system of the electronic device 100 is described in detail above; the software system of the electronic device 100 is introduced below.
Figure 6 is a schematic diagram of a software system of an electronic device according to an embodiment of this application.
As shown in Figure 6, the software system may be divided into four layers, which are, from top to bottom, an application layer, an application framework layer, Android Runtime and system libraries, and a kernel layer.
The application layer may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and messages.
The image processing of the embodiments of this application can be applied to the camera application. For example, with the image processing method provided by the embodiments of this application, zoom based on the target area can be achieved during zoom shooting in the camera application. Specifically, throughout the zoom shooting process, the zoom center of the electronic device does not need to remain at the center of the sensor; instead, the zoom center transitions smoothly from the center of the sensor to the center of the target area in the photographed scene, so that the user can track and zoom on the target area without moving the electronic device.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer, and may include some predefined functions.
For example, the application framework layer includes a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
Android Runtime may include core libraries and a virtual machine, and is responsible for scheduling and management of the Android system.
The core libraries contain two parts: one part is the functional functions that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include a plurality of functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (for example, the open graphics library for embedded systems (OpenGL ES)), and a 2D graphics engine (for example, the skia graphics library (SGL)).
The kernel layer is the layer between hardware and software, and may include driver modules such as a display driver, a camera driver, an audio driver, and a sensor driver.
At present, to meet shooting needs in different scenarios, the camera module in an electronic device usually has a zoom capability, which may include optical zoom or digital zoom. Optical zoom enlarges or shrinks the photographed scene by moving the lenses in the camera module; digital zoom achieves the effect of magnifying a distant scene by increasing the area of each pixel of the image. However, for both optical zoom and digital zoom, as the magnification increases during zooming, the zoom center point is always the center of the sensor in the electronic device; that is, throughout the zoom process the image is cropped about its center, step by step, along the central region of the sensor image, and the center of the image is always the central region of the sensor image. If the zoom center needs to be adjusted, the electronic device must be re-aimed so that it points at the target object of the zoom center. With this zoom processing, when a target area in the photographed scene needs to be enlarged, for example an area the user is interested in, the user has to move the electronic device to aim it at the target area; otherwise, a zoom operation directed at the target area cannot be achieved, resulting in a poor user experience.
In view of this, the embodiments of this application provide an image processing method. During zoom shooting, zoom based on a target area can be achieved; specifically, throughout the zoom shooting process, the zoom center of the electronic device does not need to remain at the center of the sensor, and instead transitions smoothly from the center of the sensor to the center of the target area in the photographed scene, so that the user can track and zoom on the target area without moving the electronic device, which improves the user's shooting experience.
Figure 7 is a schematic diagram of an application scenario applicable to this application. The image processing method provided by the embodiments of this application can be applied to zoom shooting by an electronic device, for example, during zoom photographing or zoom video recording.
For example, (a) and (b) in Figure 7 are schematic diagrams of existing zoom processing. When the electronic device photographs the target object, it displays the preview image 210 shown in (a) of Figure 7, and the zoom factor of the electronic device is "1×" (1 time), that is, no zoom. After the electronic device detects a zoom operation, for example a 2x zoom operation, it may display the preview image 220 shown in (b) of Figure 7; at this time, the displayed zoom factor is "2×" (2 times). When the electronic device responds to the zoom operation, the preview image 220 is a part of the un-zoomed preview image 210. In the existing zoom processing, the zoom center remains unchanged, that is, the center point of the preview image 210 is the same as the center point of the preview image 220.
For example, (c) and (d) in Figure 7 are schematic diagrams of the image processing method provided by an embodiment of this application. When the electronic device photographs the target object, it displays the preview image 230 shown in (c) of Figure 7, and the zoom factor of the electronic device is "1×" (1 time), that is, no zoom. When the electronic device detects an operation for enabling directional zoom, it starts to execute the image processing method provided by the embodiments of this application, which can implement tracking zoom based on the target area in the preview image. For example, after the electronic device detects a 2x zoom operation, it may display the preview image 240 shown in (d) of Figure 7; the center point of the preview image 230 and the center point of the preview image 240 may be different.
The image processing method provided by the embodiments of this application is described in detail below with reference to Figures 8 to 15.
Figure 8 is a schematic interface diagram of an image processing method according to an embodiment of this application.
For example, as shown in (a) of Figure 8, the user may instruct the electronic device to run the camera application by tapping the icon 302 of the "Camera" application on the desktop 301; after the camera application runs, the electronic device displays the shooting interface shown in (b) of Figure 8. Alternatively, when the electronic device is in the lock-screen state, the user may instruct the electronic device to run the camera application with a rightward swipe gesture on the display screen, and the electronic device may display the shooting interface shown in (b) of Figure 8. Alternatively, when the electronic device is in the lock-screen state and the lock-screen interface includes an icon of the camera application, the user taps the icon of the camera application to instruct the electronic device to start the camera application, and the electronic device may display the shooting interface shown in (b) of Figure 8. Alternatively, while the electronic device is running another application that has permission to call the camera application, the user may instruct the electronic device to start the camera application by tapping a corresponding control, and the electronic device may display the shooting interface shown in (b) of Figure 8; for example, when the electronic device is running an instant messaging application, the user may instruct it to start the camera application by selecting the camera function control. As shown in (b) of Figure 8, the shooting interface may include a viewfinder frame 303, shooting controls, and function controls; the shooting controls include a control 304, a settings control, and the like, and the function controls include large aperture, portrait, photo, video, and the like. After the electronic device detects a tap operation on the control 304, it starts to execute the image processing method provided by this solution, that is, the smart zoom method provided by this solution.
In some implementations, the shooting interface may further include a zoom factor indication 305. Generally, the default zoom factor of the electronic device is the basic factor, which may be "1×". The zoom factor can be understood as the focal length of the current camera, equivalent to the zoom-in/zoom-out factor relative to the reference focal length. As shown in (c) of Figure 8, the shooting interface 306 may further include a scale 307, which may be used to indicate the current zoom factor; the user can drag the indicator arrow on the scale in the shooting interface to adjust the zoom factor used by the mobile phone. For example, the indicator arrow on the scale can be dragged to adjust the zoom factor of the camera application from "1×" to "2×", and the shooting interface shown in (d) of Figure 8 is displayed.
It should be understood that starting the image processing method provided by the embodiments of this application by tapping the control 304 is described above as an example; the image processing method provided by the embodiments of this application may also be enabled through a setting option in the settings control shown in (b) of Figure 8, or another control may be provided in (b) of Figure 8, and the electronic device starts the image processing method of the embodiments of this application when it detects a tap operation on that control.
Figure 9 is a schematic interface diagram of an image processing method according to another embodiment of this application.
For example, as shown in (a) of Figure 9, the user may instruct the electronic device to run the camera application by tapping the icon 402 of the "Camera" application on the desktop 401; the electronic device runs the camera application and displays the shooting interface shown in (b) of Figure 9. As shown in (b) of Figure 9, the shooting interface may include a viewfinder frame 403, shooting controls, and function controls; the shooting controls include a control 404, a settings control, and the like, and the function controls include large aperture, portrait, photo, video, and the like. After the electronic device detects a tap operation on the control 404, it starts to execute the image processing method provided by this solution.
In some implementations, the shooting interface may further include a zoom factor indication 405. Generally, the default zoom factor of the electronic device is the basic factor, which may be "1×". The zoom factor can be understood as the focal length of the current camera, equivalent to the zoom-in/zoom-out factor relative to the reference focal length. As shown in (c) of Figure 9, the user may reduce the zoom factor used by the electronic device with a two-finger (or three-finger) pinch gesture on the display screen; or the user may increase the zoom factor by making a two-finger (or three-finger) spread gesture, that is, moving the fingers apart in the direction opposite to pinching. For example, the zoom factor of the camera application can be adjusted from "1×" to "2×" with a two-finger spread gesture on the display screen, and the shooting interface shown in (d) of Figure 9 is displayed.
It should be understood that, for the manner of running the camera application shown in Figure 9, reference may be made to the related description of Figure 8; details are not repeated here.
It should also be understood that the scale shown in Figures 8 and 9 is located at the bottom of the viewfinder frame; the scale may also be located on the right side of the viewfinder frame. This application does not impose any limitation on the specific position of the scale.
Optionally, in yet another embodiment of this application, the electronic device may enable smart zoom and automatically adjust the zoom factor by recognizing the distance to the photographed object; during zooming, the image processing method provided by the embodiments of this application may be executed.
本申请实施例提供了一种图像处理方法,可以应用于图像的变焦拍摄;在变焦拍摄过程中,可以实现基于目标区域的变焦,目标区域可以是指用户感兴趣的图像区域,或者,目标区域可以是指需要跟踪进行变焦的图像区域;例如,电子设备的变焦中心无需始终为传感器的中心,而是将变焦中心从传感器的中心平滑过渡至目标区域的中心;例如,如图10中的(a)所示为现有的变焦处理的过程,在变焦拍摄中变焦中心始终位于点1;图10中的(b)所示为通过本申请实施例中的图像处理方法得到的变焦中心的示意图;其中,电子设备的传感器中心为点1,图像区域410为目标区域,目标区域的中心点为点4;通过本申请实施例提供的图像处理方法,在变焦拍摄中(例如,4次变焦包括从图像1变焦到图像2、图像3与图像4)可以实现变焦中心点从点1移动至点2、点3最终变焦中心点移动至点4,从而实现基于目标区域的指向性变焦;因此,通过本申请实施例提供的图像处理方法,可以在电子设备的位置相同的情况下,针对拍摄场景中的目标区域实现追踪变焦,提高用户的拍摄体验。Embodiments of the present application provide an image processing method that can be applied to zoom shooting of images; during the zoom shooting process, zoom based on a target area can be achieved, and the target area can refer to the image area that the user is interested in, or the target area. It can refer to the image area that needs to be tracked for zooming; for example, the zoom center of the electronic device does not always need to be the center of the sensor, but the zoom center smoothly transitions from the center of the sensor to the center of the target area; for example, as shown in Figure 10 ( a) shows the existing zoom processing process. The zoom center is always located at point 1 during zoom shooting; (b) in Figure 10 shows a schematic diagram of the zoom center obtained through the image processing method in the embodiment of the present application. ; Among them, the sensor center of the electronic device is point 1, the image area 410 is the target area, and the center point of the target area is point 4; through the image processing method provided by the embodiment of the present application, during zoom shooting (for example, 4 zooms include Zooming from image 1 to image 2, image 3 and image 4) can realize the zoom center point moving from point 1 to point 2, point 3 and finally the zoom center point moves to point 4, thereby achieving directional zoom based on the target area; therefore, Through the image processing method provided by the embodiment of the present application, when the position of the electronic device is the same, tracking zoom can be achieved for the target area in the shooting scene, thereby improving the user's shooting experience.
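As an illustration of this smooth transition, the following Python sketch interpolates the zoom center linearly along the line from the sensor center to the target-area center as the zoom magnification increases. The coordinates, the start magnification of 1× and the target magnification of 2× are hypothetical example values, and plain linear interpolation is an assumption; the embodiments divide the segment according to the zoom-ruler scale, which yields the same points when the ruler steps are uniform.

```python
def zoom_center(sensor_center, target_center, zoom, start_zoom=1.0, target_zoom=2.0):
    """Interpolate the zoom center between the sensor center (used at
    start_zoom) and the target-area center (used at target_zoom)."""
    t = (zoom - start_zoom) / (target_zoom - start_zoom)
    t = max(0.0, min(1.0, t))  # clamp so the center never overshoots the target
    x = sensor_center[0] + t * (target_center[0] - sensor_center[0])
    y = sensor_center[1] + t * (target_center[1] - sensor_center[1])
    return (x, y)

# Example: at 1.33x the center has moved about one third of the way
# from the sensor center toward the target-area center.
print(zoom_center((2000, 1500), (2800, 900), 1.33))  # roughly (2264, 1302)
```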
The zoom processing process based on the target area shown in (b) of Figure 10 is described in detail below with reference to Figure 11. The zoom processing process shown in Figure 11 may include the following steps:
Step 1: With the electronic device in the shooting mode at the reference focal length, collect the image 601; in the image 601, determine the area size of the target area 602 and the center point of the target area 602; the position of the target area 602 can be determined according to its area size and center point.
It should be understood that the target area 602 may refer to an image area that the user is interested in, or the target area 602 may refer to an image area that needs to be tracked for zooming.
Exemplarily, the reference focal length of the electronic device may correspond to a zoom magnification of "1×"; the center of the image 601 is point A, and point A may be determined based on the center position of the sensor in the electronic device.
Exemplarily, the position of the center point of the target area 602 may be obtained based on the following methods:
Implementation 1
Exemplarily, after the electronic device displays the image 601, it detects a tap operation by the user on the image 601; in response to the tap operation, the contact point between the user and the image 601 may be used as the target center point. For example, if the user taps the chin of the portrait in the image 601, the chin of the portrait may be used as the target center point, that is, point D.
Implementation 2
Exemplarily, portrait recognition may be performed on the image to identify the portrait in the image 601, and the center point of the portrait is used as the center point of the target area.
Implementation 3
Exemplarily, an image area with a higher priority level in the image 601 may be determined based on a recognition policy, and the center point of the image area with the higher priority level is used as the center point of the target area.
Exemplarily, the recognition policy may specify that the shooting scene includes category 1, category 2, category 3, and so on, where the priority of category 1 is higher than that of category 2, and the priority of category 2 is higher than that of category 3. For example, in the portrait shooting mode, category 1 may refer to portraits, category 2 may refer to green plants, and category 3 may refer to scenery (for example, the sky, distant mountains, and the like); in the landscape shooting mode, category 1 may refer to scenery (for example, the sky, distant mountains, and the like), category 2 may refer to green plants, and category 3 may refer to portraits.
It should be understood that the foregoing is an example of the recognition policy; through the recognition policy, the image area in which the higher-priority category in the image 601 is located can be determined. This application does not limit the specific content of the recognition policy.
Implementation 4
Exemplarily, the electronic device may identify a subject in the first preview image and use the center point of the image area in which the subject is located as the center point of the target area. For example, the subject may be any one of a portrait, a green plant, scenery, or an animal in the first preview image; after the subject in the first preview image is identified, the center point of the image area in which the subject is located may be used as the center point of the target area.
Implementation 5
Exemplarily, assuming that the image 601 includes at least two subjects, prompt information may be displayed on the display screen of the electronic device, and the center point of a target subject selected by the user from the at least two subjects is used as the center point of the target area.
Exemplarily, assuming that the image 601 includes a portrait and the moon, the prompt information "Zoom is centered on the portrait" and the selection controls "Yes" and "No" may be displayed on the display screen of the electronic device. If it is detected that the user taps the control "Yes", the center of the portrait is used as the center point of the target area; if it is detected that the user taps the control "No", the center of the moon may be used as the center point of the target area. It should be understood that the foregoing Implementation 1 to Implementation 5 are examples of determining the center point of the target area 602; the center point of the target area 602 may also be determined in other manners, which is not limited in this application.
In the embodiments of this application, the center point of the target area 602 is the target zoom center. During zooming, the zoom center can smoothly transition from the center of the sensor (for example, point A) to the target zoom center (for example, point D), so that the user can implement tracking zoom for the target area in the shooting scene without moving the electronic device.
Optionally, the area size of the target area 602 in the image 601 may be determined.
Exemplarily, as shown in Figure 12, the electronic device may determine the target area based on the user's instruction, and the area size of the target area may be a preset value. When the electronic device detects that the touch point corresponding to the user's tap operation on the screen is point D, the target area is determined with point D as the center, where the prompt box 610 is used to mark the target area.
Optionally, the user may operate the prompt box 610 to adjust the size and position of the target area. Exemplarily, the user may drag a corner area of the prompt box 610 to enlarge or shrink the target area; the user may also drag an edge of the prompt box 610 to select a different target area.
Exemplarily, the electronic device may automatically identify different image areas in the image 601 and obtain, based on the recognition policy, the image area with the higher priority in the image 601, and the target area 602 covers that higher-priority image area. As shown in Figure 13, the electronic device identifies that the image 601 includes a user image area 620, a bench image area 630, and a fishing rod image area 640, where the user image area 620 has the highest priority; therefore, the target area 602 covers the user image area 620, and the area size of the target area 602 may be determined based on the user image area 620.
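A brief sketch of how a tap-based target area might be constructed is given below. It assumes a preset box size and simply shifts the box so that it stays inside the frame; this clamping convention and the frame and box dimensions are illustrative assumptions, not details taken from the embodiments.

```python
def target_area_from_tap(tap_point, image_size, box_size=(400, 300)):
    """Build a target area centered on the user's tap point, shifted so
    that the whole box stays inside the image."""
    img_w, img_h = image_size
    box_w, box_h = box_size
    x = min(max(tap_point[0] - box_w // 2, 0), img_w - box_w)
    y = min(max(tap_point[1] - box_h // 2, 0), img_h - box_h)
    return (x, y, box_w, box_h)  # left, top, width, height

# Example: a tap near the right edge of a 4000 x 3000 frame.
print(target_area_from_tap((3900, 1500), (4000, 3000)))  # (3600, 1350, 400, 300)
```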
As shown in Figure 11, the center point of the image 601 can be determined to be point A according to the center position of the sensor in the electronic device, and point A is the center point at single magnification; the center point D of the target area 602 can be determined through the foregoing Implementation 1 to Implementation 5. The target zoom magnification corresponding to point D can be determined based on the area size of the target area 602; for example, assuming that the area size of the target area 602 is A1 and the area size of the image 601 is A2, point D can be regarded as the center point at a zoom magnification of √(A2/A1). Point A and point D are connected to obtain the line segment between point A and point D, and the line segment between point A and point D is divided. For example, the line segment between point A and point D may be divided into equal parts according to the scale of the zoom ruler in the camera application; for example, if point A is the center at the 1× zoom magnification, point D is the center at the 2× zoom magnification, and the scale interval of the zoom ruler in the camera application is 0.2, the segment between point A and point D may be equally divided into 5 parts, corresponding to the 1× zoom magnification, the 1.2× zoom magnification, the 1.4× zoom magnification, the 1.6× zoom magnification, the 1.8× zoom magnification, and the 2× zoom magnification.
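The relationship between the target-area size and the target zoom magnification, and the division of the zoom range into ruler steps, can be sketched as follows. The square-root relation follows from the area ratio discussed above; the fixed ruler step of 0.2 and the function name are example assumptions.

```python
import math

def target_zoom_and_steps(target_area, full_area, ruler_step=0.2):
    """Return the target zoom magnification sqrt(A2/A1) and the intermediate
    zoom values between 1x and the target, spaced roughly by the ruler step."""
    target_zoom = math.sqrt(full_area / target_area)
    n_steps = max(1, round((target_zoom - 1.0) / ruler_step))
    steps = [round(1.0 + i * (target_zoom - 1.0) / n_steps, 3)
             for i in range(n_steps + 1)]
    return target_zoom, steps

# Example: the target area is 1/4 of the image, so the target zoom is 2x.
print(target_zoom_and_steps(1.0, 4.0))  # (2.0, [1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
```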
It should be understood that the cropping area during zooming is related to the zoom magnification. Assuming that the image at single magnification has N×M pixels and it is detected that the current zoom magnification of the electronic device is K, the area of the cropping area is (N×M)/K². To ensure that the target area is displayed completely, the cropping area needs to cover the target area. Since the cropping area corresponding to the 2× zoom magnification is 1/4 of the image at the 1× zoom magnification, when the area of the target area 602 is equal to or smaller than 1/4 of the image corresponding to the 1× zoom magnification, the center point of the target area can be regarded as the center point corresponding to the 2× zoom magnification.
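One possible way to compute the crop rectangle implied by the (N×M)/K² relation is sketched below; clamping the rectangle to the sensor bounds is omitted for brevity, and the coordinates in the example are hypothetical.

```python
def crop_region(center, sensor_size, zoom):
    """Crop rectangle whose width and height shrink by 1/zoom (so its area
    shrinks by 1/zoom**2), centered on the current zoom center point."""
    sensor_w, sensor_h = sensor_size
    crop_w, crop_h = sensor_w / zoom, sensor_h / zoom
    left = center[0] - crop_w / 2
    top = center[1] - crop_h / 2
    return (left, top, crop_w, crop_h)

# Example: a 2x crop centered on point D covers 1/4 of a 4000 x 3000 sensor.
print(crop_region((2800, 900), (4000, 3000), 2.0))  # (1800.0, 150.0, 2000.0, 1500.0)
```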
In an example, if the area of the target area 602 is equal to or smaller than 1/4 of the area of the image 601, the center point D of the target area 602 may be the center point at the 2× zoom magnification. Assuming that the number of pixels between point A and point D is 100, point A is the center point corresponding to the 1× zoom magnification, and point D is the center point corresponding to the 2× zoom magnification, as shown in Figure 11, the segment between point A and point D can be equally divided into 3 parts, namely the segments between center point A, center point B, center point C, and center point D, and the diagonal pixel spacing between adjacent zoom center points is 100/3 ≈ 33.3 pixels. Center point B may represent the center point of 1.33× zoom; center point C may represent the center point of 1.66× zoom; center point D may represent the center point of 2× zoom.
In the embodiments of this application, as shown in Figure 11, according to the center point A of single zoom and the center point D of the target area, cropping is gradually performed based on center point B and center point C during zooming, implementing smooth zoom based on the target area in the image. Since the zoom center point gradually moves from point A to point D, the problem of jumps in the preview image can be avoided.
In an example, the segment between point A and point D may also be divided into unequal parts; for example, the segment between point A and point D may be divided through an interpolation algorithm. For example, point B1 and point C1 can be obtained through an interpolation algorithm, where point A may represent the center point of unit zoom, point D may represent the center point of 2× zoom, point B1 may represent the center point of 1.2× zoom, and center point C1 may represent the center point of 1.7× zoom.
It should be understood that the foregoing uses equal division and unequal division as examples of dividing the segment between point A and point D. In the embodiments of this application, during zoom shooting, the zoom center can smoothly transition from the center of the sensor (for example, point A) to the center of the target area (for example, point D); this application does not limit the specific manner of dividing the line segment between point A and point D.
Exemplarily, when the electronic device is at single zoom, the field of view (FOV) of the electronic device can be regarded as 100% of the original shooting angle range, as shown in the image 601 in Figure 11. When the zoom magnification of the electronic device is adjusted to 1.33× zoom, for example, the zoom center of the electronic device moves 33.3 pixels from point A along the line AD to center point B, and the field of view of the electronic device becomes 56.5% of the original shooting angle range, as shown in the image 603 in Figure 11. When the zoom magnification of the electronic device is adjusted to 1.66× zoom, for example, the zoom center of the electronic device moves 33.3 pixels from point B along the line AD to center point C, and the field of view of the electronic device becomes 36.3% of the original shooting angle range, as shown in the image 604 in Figure 11. When the zoom magnification of the electronic device is adjusted to 2× zoom, for example, the zoom center of the electronic device moves 33.3 pixels from point C along the line AD to center point D, and the field of view of the electronic device becomes 25% of the original shooting angle range, as shown in the image 605 in Figure 11.
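The percentages quoted above (56.5%, 36.3%, 25%) are consistent with the retained field of view scaling as 1/K²; the short sketch below assumes that relation, which is an inference from the figures rather than a formula stated in the embodiments.

```python
def fov_fraction(zoom):
    """Fraction of the original field of view retained at a given zoom,
    assuming the retained area scales as 1 / zoom**2."""
    return 1.0 / zoom ** 2

for z in (1.0, 1.33, 1.66, 2.0):
    print(f"{z:.2f}x -> {fov_fraction(z):.1%}")  # 100.0%, 56.5%, 36.3%, 25.0%
```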
It should be noted that the field of view of the electronic device is used to indicate the maximum angle that the camera of the electronic device can capture when photographing a target object. If the target object is within the maximum angle range that the camera can capture, the light reflected by the target object can be collected by the camera, so that the image of the target object is presented in the preview image displayed by the electronic device. If the target object is outside the maximum angle range that the camera can capture, the light reflected by the target object cannot be collected by the camera, and the image of the target object cannot appear in the preview image displayed by the electronic device. Generally, the larger the field of view of the camera, the larger the shooting range when the camera is used; the smaller the field of view of the camera, the smaller the shooting range. The field of view may also be referred to as the "field of view range", "field of vision range", "field of vision area", and the like.
Step 2: When the electronic device zooms to 1.33×, since the field of view of the electronic device becomes 56.5% of the original shooting angle range, the image 601 can be cropped; specifically, with point B in the image 601 as the center point, 56.5% of the field of view range corresponding to the image 601 is cropped out to obtain the image 603.
Step 3: When the electronic device zooms to 1.66×, since the field of view of the electronic device becomes 36.3% of the original shooting angle range, the image 601 can be cropped; specifically, with point C in the image 601 as the center point, 36.3% of the field of view range corresponding to the image 601 is cropped out to obtain the image 604.
Step 4: When the electronic device zooms to 2×, since the field of view of the electronic device becomes 25% of the original shooting angle range, the image 601 can be cropped; specifically, with point D in the image 601 as the center point, 25% of the field of view range corresponding to the image 601 is cropped out to obtain the image 605.
Exemplarily, assuming that the image 605 has the cropping area size corresponding to the 2× zoom magnification, the cropping area size of the image 605 is 1/4 of the size of the entire image. If the area of the target area 602 is equal to 1/4 of the area of the image 601, the image 605 is the same as the target area 602; if the area of the target area 602 is smaller than 1/4 of the area of the image 601, the image 605 includes the target area 602 and has point D as its center point. Figure 11 uses the case in which the area of the target area 602 is 1/4 of the area of the image 601 as an example, so the image 605 and the target area 602 correspond to the same image content. Optionally, if zoom processing is further performed on the image 605, that is, when zoom processing with a zoom magnification greater than 2 is performed on the image 601, further cropping may be performed based on the image corresponding to the 2× zoom magnification; for example, cropping is performed based on the image 605 with point D as the zoom center point.
In the embodiments of this application, when 2× zoom is performed, the field of view of the electronic device is 25% of the field of view corresponding to single zoom; at this time, the field of view is small and the number of acquired pixels is small. If the image 601 is cropped and displayed directly on the electronic device, the definition of the image corresponding to the image 605 is low. Further, to improve the definition of the image after zoom processing, the Remosaic manner may be used to perform image processing on the pixels in the image to obtain the image 605; by using the Remosaic manner, the number of corresponding pixels in the image 605 can be increased, thereby improving the definition of the image.
Exemplarily, assuming that the pixels collected by the image sensor of the electronic device are 4×4, (a) in Figure 4 shows one Bayer-format pixel obtained by using the Binning manner, and (b) in Figure 4 shows four Bayer-format pixels obtained by using the Remosaic manner. For the same number of collected pixels, using the Remosaic manner can effectively increase the number of output pixels, thereby improving the definition of the image.
It should be understood that if the image is read out in the 2×2 Remosaic manner, that is, the 2×2 pixels collected by the image sensor are rearranged, the image is read out in the Remosaic manner when the cropping area corresponding to the zoom magnification is smaller than or equal to 1/4 of the entire image area.
Step 5: Display the image corresponding to the target area 602 in the camera application.
Exemplarily, the image content corresponding to the image 605 may be adjusted, according to the resolution of the display screen of the electronic device, to an image that fits the display specifications of the electronic device for display.
For example, when the shooting mode of the camera application of the electronic device is a normal scene (for example, a single-zoom scene), the Binning manner may be used to read out the image, improving the dynamic range and light sensitivity of the entire image. During zooming, if the zoom magnification is in the range from single zoom to 2× zoom, the Binning manner may be used to read out the image; if the zoom magnification is 2× zoom or above, to avoid the problem of reduced image definition, the Remosaic manner may be used to read out the image.
Optionally, the foregoing uses reading out the image in the 2×2 Remosaic manner as an example; the image may also be read out in a 3×3 Remosaic manner or a 4×4 Remosaic manner.
Exemplarily, if the image is read out in the 3×3 Remosaic manner, that is, the 3×3 pixels collected by the image sensor are rearranged, the image is read out in the Remosaic manner when the cropping area corresponding to the zoom magnification is smaller than or equal to 1/9 of the entire image area, and the image is read out in the Binning manner when the cropping area corresponding to the zoom magnification is larger than 1/9 of the entire image area.
Exemplarily, if the image is read out in the 4×4 Remosaic manner, that is, the 4×4 pixels collected by the image sensor are rearranged, the image is read out in the Remosaic manner when the cropping area corresponding to the zoom magnification is smaller than or equal to 1/16 of the entire image area, and the image is read out in the Binning manner when the cropping area corresponding to the zoom magnification is larger than 1/16 of the entire image area.
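The rule described in the preceding paragraphs can be summarized in a small helper; the function name and the crop_fraction argument (crop area divided by full image area) are illustrative only.

```python
def readout_mode(crop_fraction, remosaic_block=2):
    """Choose between Binning and Remosaic readout: an n x n Remosaic readout
    is used when the crop covers at most 1/n**2 of the full image area,
    otherwise Binning is used."""
    threshold = 1.0 / remosaic_block ** 2
    return "Remosaic" if crop_fraction <= threshold else "Binning"

print(readout_mode(0.25, remosaic_block=2))  # Remosaic (2x zoom, 2x2 readout)
print(readout_mode(0.30, remosaic_block=3))  # Binning  (crop larger than 1/9)
```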
It should be understood that the foregoing is an example description of reading out the image in the Remosaic manner, and this application does not limit the specific form of reading out the image in the Remosaic manner.
The image processing method provided by the embodiments of this application can implement zoom based on the target area during zoom shooting. For example, during the entire zoom shooting process, the zoom center does not need to always be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, so that the user can implement tracking zoom for the target area in the shooting scene without moving the electronic device, improving the user's shooting experience. In addition, when the zoom magnification is large, the image can be read out in the Remosaic manner, thereby avoiding a large loss of image definition after zooming and improving the definition of the image after zoom processing.
Figure 14 is a schematic diagram of an image processing method provided by an embodiment of this application. The method 700 shown in Figure 14 includes steps S710 to S750, which are described in detail below.
Step S710: Start the camera application in the electronic device.
Exemplarily, for starting the camera application of the electronic device, reference may be made to the related description of (a) in Figure 8, and details are not described herein again.
Step S720: Display a first preview image.
The zoom magnification corresponding to the first preview image is the first magnification, and the center point of the first preview image is the first center point.
Exemplarily, the zoom magnification corresponding to the first preview image is the first magnification, and the first magnification may be single zoom (1×); after the electronic device runs the camera application, the preview image corresponding to single zoom (1×) may be displayed.
For example, the first preview image may be the preview image shown in (b) of Figure 8; or the first preview image may be the preview image shown in (b) of Figure 9; or the first preview image may be the image 601 shown in Figure 11, and the first center point may be point A.
Step S730: Determine a second center point.
The second center point is the center point of the target area, and the first center point does not coincide with the second center point.
It should be understood that the first center point not coinciding with the second center point may mean that the first center point and the second center point are two points at different positions.
For example, as shown in Figure 11, the first center point may refer to point A in the image 601, and the second center point may refer to the center point D of the target area 602; or, as shown in (b) of Figure 10, the first center point may refer to point 1, and the second center point may refer to point 4.
Optionally, the target zoom point may be determined with reference to the foregoing related descriptions, in Implementation 1 to Implementation 5 of Figure 11, of determining the position of the center point of the target area, and details are not described herein again.
Optionally, as shown in Figure 12, the electronic device may determine the target area based on the user's instruction, and the area size of the target area may be a preset value. When the electronic device detects that the touch point corresponding to the user's tap operation on the screen is point D, the target area is determined with point D as the center, where the prompt box 610 is used to mark the target area. It should be understood that, for the specific description, reference may be made to Figure 12, and details are not described herein again.
Optionally, as shown in Figure 13, a target shooting object (for example, the image area 620) in the first preview image is detected, the target area includes the image area in which the target shooting object is located, and the target shooting object is determined based on the priorities of the shooting objects in the first preview image. It should be understood that, for the specific description, reference may be made to Figure 13, and details are not described herein again.
Optionally, the electronic device detects a first subject in the first preview image, and the second center point is the center point of the first subject; for example, the first subject may be any one of a portrait, a green plant, scenery, or an animal in the first preview image. After the first subject in the first preview image is identified, the center point of the image area in which the first subject is located may be used as the center point of the target area.
Step S740: Detect a first operation.
The first operation indicates that the zoom magnification of the electronic device is the second magnification.
Exemplarily, the first operation may refer to a slide operation, as shown in (c) of Figure 8; or the first operation may refer to a pinch operation or an outward slide operation, as shown in (c) of Figure 9.
Step S750: In response to the first operation, display a second preview image.
The zoom magnification corresponding to the second preview image is the second magnification, and the center point of the second preview image is the second center point.
Optionally, the second preview image overlaps the target area; for example, the second preview image includes the target area, or the second preview image includes a part of the target area.
For example, as shown in Figure 15, the target area is 760 and the second preview image is 770, where the target area 760 includes a subject. That the second preview image 770 includes the target area 760 may mean that the second preview image 770 includes the target area 760 and other image areas, that is, the second preview image 770 includes the image area in which the subject is located and other image areas, as shown in (a) of Figure 15; or it may mean that the second preview image 770 completely coincides with the target area 760, that is, the second preview image 770 coincides with the image area in which the subject is located, as shown in (b) of Figure 15. That the second preview image 770 includes a part of the target area 760 may mean that the second preview image 770 includes a part of the image area in the target area 760, that is, the second preview image 770 includes a part of the image area in which the subject is located, as shown in (c) of Figure 15.
It should be understood that the second preview image including the target area may mean that the second preview image covers the target area, or that the second preview image includes the image content of the target area.
Exemplarily, the target area 760 shown in Figure 15 may refer to the image area 410 shown in (b) of Figure 10. The second preview image shown in (a) of Figure 15 may refer to image 3 shown in (b) of Figure 10, where image 3 includes the target area 410 and other image areas; the center point of the second preview image 770 may refer to point 3, and the center point of the target area 760 may refer to point 4. The second preview image shown in (b) of Figure 15 may refer to image 4 shown in (b) of Figure 10, where image 4 coincides with the target area 410; that is, the center point of the second preview image 770 coincides with the center point of the target area 760, and the center point is point 4. The center point of the second preview image 770 shown in (c) of Figure 15 coincides with the center point of the target area 760, and the center point may refer to the center point of the image area 410 shown in (b) of Figure 10, that is, point 4.
For example, the second preview image may be the preview image shown in (d) of Figure 8; or the second preview image may be the preview image shown in (d) of Figure 9; or the second preview image may be the image 605 shown in Figure 11.
Optionally, when the first preview image and the second preview image are displayed, the position of the electronic device is the same.
It should be understood that the position of the electronic device being the same may mean that the electronic device does not undergo movement such as deflection, translation, or flipping.
Optionally, the image processing method further includes: detecting a second operation, where the second operation indicates that the zoom magnification of the electronic device is a third magnification; and in response to the second operation, displaying a third preview image, where the center point of the third preview image is a third center point, and the third center point is on the line connecting the first center point and the second center point. For example, as shown in Figure 11, the third center point may refer to point B and the third preview image may refer to the image 603; or the third center point may refer to point C and the third preview image may refer to the image 604.
Optionally, the line connecting the first center point and the second center point includes N center points, each of the N center points corresponds to at least one zoom magnification, and N is an integer greater than or equal to 2. For example, as shown in Figure 11, the first center point is point A, the second center point is point D, and the N center points may include point A, point B, point C, and point D; for the specific description, reference may be made to the related description of Figure 11, and details are not described herein again.
For example, the N center points include the first center point, the second center point, and N-2 center points, where the N-2 center points are located on the line connecting the first center point and the second center point, each of the N-2 center points may correspond to one zoom magnification, and the second center point may correspond to at least one zoom magnification. For example, if the ratio between the area of the target area (for example, A1) and the area of the first preview image (for example, A2) is in the range (1/9, 1/4), the second center point may be the center point at the 2× zoom magnification, or the second center point may be the center point at the √(A2/A1) zoom magnification.
Optionally, the line connecting the first center point and the target center point may be divided into equal parts to obtain the N center points; or the line connecting the first center point and the target center point is divided according to an interpolation algorithm to obtain the N center points. For the specific description, reference may be made to the related description of Figure 11, and details are not described herein again.
Optionally, displaying the second preview image in response to the first operation includes:
if the ratio between the area of the target area and the area of the first preview image is smaller than or equal to a first preset threshold, displaying the second preview image using a first pixel combination manner;
if the ratio between the area of the target area and the area of the first preview image is greater than the first preset threshold, displaying the second preview image using a second pixel combination manner. The first pixel combination manner may refer to reading out the image in the Remosaic manner, as shown in Figure 4; the second pixel combination manner may refer to reading out the image in the Binning manner, as shown in Figures 1 to 3.
Exemplarily, if the ratio between the area of the target area and the area of the first preview image is smaller than or equal to the first preset threshold, the electronic device crops the first preview image using the cropping area corresponding to the second magnification to obtain a first image area, where the first image area includes M pixels; the M pixels are rearranged to obtain K pixels, where M and K are both positive integers and K is greater than M; and the second preview image is displayed based on the K pixels.
Exemplarily, if the ratio between the area of the target area and the area of the first preview image is greater than the first preset threshold, the first preview image is cropped using the cropping area corresponding to the second magnification to obtain a first image area, where the first image area includes M pixels; the M pixels are combined to obtain H pixels, where M and H are both positive integers and H is smaller than M; and the second preview image is displayed based on the H pixels.
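As a rough illustration of the second pixel combination manner (the H < M case), the sketch below averages each 2×2 block of a single-channel crop. This is a simplification: real Binning merges same-color Bayer samples, and the Remosaic branch instead rearranges the samples into a finer Bayer mosaic, neither of which is reproduced here.

```python
import numpy as np

def binning_2x2(raw):
    """Combine each 2x2 block of the cropped raw data into one output pixel,
    so M input pixels become H = M / 4 output pixels (simplified to a plain
    block average on a single-channel array)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Example: an 8 x 8 crop (M = 64 pixels) becomes 4 x 4 (H = 16 pixels).
crop = np.arange(64, dtype=np.float32).reshape(8, 8)
print(binning_2x2(crop).shape)  # (4, 4)
```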
For example, if the first pixel combination manner is reading out the image in the 2×2 Remosaic manner, the first preset threshold is 1/4.
For example, if the first pixel combination manner is reading out the image in the 3×3 Remosaic manner, the first preset threshold is 1/9.
For example, if the first pixel combination manner is reading out the image in the 4×4 Remosaic manner, the first preset threshold is 1/16.
Optionally, the second magnification is determined based on the ratio between the area of the target area and the area of the first preview image.
For example, if the ratio between the area of the target area (for example, the image 602 shown in Figure 11) and the area of the first preview image (for example, the image 601 shown in Figure 11) is smaller than or equal to 1/4, the second magnification is 2×, that is, the 2× zoom magnification.
For example, if the ratio between the area of the target area (for example, the image 602 shown in Figure 11) and the area of the first preview image (for example, the image 601 shown in Figure 11) is smaller than or equal to 1/9, the second magnification is 3×, that is, the 3× zoom magnification.
For example, if the ratio between the area of the target area (for example, the image 602 shown in Figure 11) and the area of the first preview image (for example, the image 601 shown in Figure 11) is smaller than or equal to 1/16, the second magnification is 4×, that is, the 4× zoom magnification.
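The three examples above can be combined into a single mapping from the area ratio to the second magnification. Choosing the largest applicable magnification is an assumption here, since the text lists the cases independently, and the fallback of 1× for larger target areas is likewise hypothetical.

```python
def second_magnification(target_area, full_area):
    """Map the area ratio to the second zoom magnification using the
    1/16, 1/9 and 1/4 thresholds described above."""
    ratio = target_area / full_area
    if ratio <= 1 / 16:
        return 4.0
    if ratio <= 1 / 9:
        return 3.0
    if ratio <= 1 / 4:
        return 2.0
    return 1.0  # target area too large for these preset magnifications

print(second_magnification(1.0, 10.0))  # 3.0, since 1/16 < 0.1 <= 1/9
```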
The image processing method provided by the embodiments of this application can implement zoom based on the target area during zoom shooting. Specifically, during the entire zoom shooting process, the zoom center of the electronic device does not need to always be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, so that the user can implement tracking zoom for the target area in the shooting scene without moving the electronic device, improving the user's shooting experience.
Figure 16 is a schematic diagram of an image processing method provided by an embodiment of this application. The method 800 shown in Figure 16 includes steps S801 to S808, which are described in detail below.
Step S801: The electronic device runs the camera application.
Exemplarily, for running the camera application of the electronic device, reference may be made to the related description of (a) in Figure 8, and details are not described herein again.
Step S802: The electronic device displays the first preview image and determines a first zoom center.
It should be understood that the first zoom center may refer to the first center point shown in Figure 14.
Exemplarily, the zoom magnification corresponding to the first preview image is the first magnification, and the first magnification may be single zoom (1×); after the electronic device runs the camera application, the preview image corresponding to single zoom (1×) may be displayed.
For example, the first preview image may be the preview image shown in (b) of Figure 8; or the first preview image may be the preview image shown in (b) of Figure 9; or the first preview image may be the image 601 shown in Figure 11.
For example, as shown in Figure 11, the first center point may refer to point A in the image 601; or, as shown in (b) of Figure 10, the first center point may refer to point 1.
Step S803: Determine a target zoom center and a target area according to the first preview image.
It should be understood that the target zoom center may refer to the second center point shown in Figure 14.
For example, as shown in Figure 11, the second center point may refer to point D in the target area 602; or, as shown in (b) of Figure 10, the second center point may refer to point 4.
Optionally, a first operation of the user on the first preview image is detected, and the target zoom center point and the target area are determined according to the first operation.
Exemplarily, the first operation may refer to an operation of tapping the first preview image, and the target zoom center point may refer to the touch point of the user in the first preview image; as shown in Figure 11, the target zoom center may refer to point D in the image 601.
Exemplarily, as shown in Figure 12, a tap operation of the user on the electronic device is detected, and the target center point is the touch point between the user and the first preview image (for example, point D); the target area is determined based on the target center point and a preset image area size (for example, the image area 610). It should be understood that, for the specific description, reference may be made to Figure 12, and details are not described herein again.
Exemplarily, as shown in Figure 13, a target shooting object (for example, the image area 620) in the first preview image is detected; the target area includes the image area in which the target shooting object is located, and the target shooting object is determined based on the priorities of the shooting objects in the first preview image. It should be understood that, for the specific description, reference may be made to Figure 13, and details are not described herein again.
Optionally, the target zoom center may refer to the center point of the target area in the shooting scene; the target area may refer to an image area that the user is interested in, or the target area may refer to an image area that needs to be tracked for zooming. For the specific method of determining the target zoom center point, reference may be made to the related descriptions of Implementation 1 to Implementation 5 for determining the center point of the target area in Figure 11, and details are not described herein again.
Optionally, the zoom magnification corresponding to the target zoom center point may be determined based on the area size of the target area; reference may be made to the related description of Figure 11, and details are not described herein again.
Step S804: Obtain N zoom center points according to the first zoom center and the target zoom center.
Exemplarily, the line connecting the first zoom center and the target zoom center may be divided into equal parts to obtain the N zoom centers; alternatively, the line connecting the first zoom center and the target zoom center may be divided through an interpolation algorithm to obtain the N zoom centers. Reference may be made to the related description of Figure 11, and details are not described herein again.
Optionally, N is related to the scale of the zoom ruler in the camera. For example, if the first zoom center point corresponds to the 1× zoom magnification, the target zoom center point corresponds to the 2× zoom magnification, and the ruler from 1× zoom to 2× zoom in the camera has 5 divisions, that is, the process from 1× zoom to 2× zoom is 1× ~ 1.2× ~ 1.4× ~ 1.6× ~ 1.8× ~ 2×, then N may be equal to 5.
Step S805: Determine whether the current zoom magnification satisfies a preset condition; if the current zoom magnification satisfies the preset condition, perform step S806; if the current zoom magnification does not satisfy the preset condition, perform step S807.
Exemplarily, if the first pixel combination manner is reading out the image in the 2×2 Remosaic manner, the preset condition is that the current zoom magnification is greater than or equal to the 2× zoom magnification; if the current zoom magnification is greater than or equal to the 2× zoom magnification, the current zoom magnification satisfies the preset condition and step S806 is performed; if the current zoom magnification is smaller than the 2× zoom magnification, the current zoom magnification does not satisfy the preset condition and step S807 is performed.
Step S806: Generate the second preview image using the first pixel combination manner.
Exemplarily, the first pixel combination manner may refer to reading out the image in the Remosaic manner.
Step S807: Generate the second preview image using the second pixel combination manner.
Exemplarily, the second pixel combination manner may refer to reading out the image in the Binning manner.
Reading out the image in the Remosaic manner and reading out the image in the Binning manner have been described above, and details are not described herein again.
Step S808: Display the second preview image, where the zoom magnification corresponding to the second preview image is the current zoom magnification.
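The flow of steps S801 to S808 can be pieced together in a compact, self-contained sketch. It assumes that the target zoom magnification is √(A2/A1), that the preset condition of step S805 is "current zoom magnification ≥ 2×" (the 2×2 readout case), and that the zoom center is interpolated linearly between the first zoom center and the target zoom center; all function and parameter names are illustrative.

```python
import math

def method_800_step(target_center, sensor_center, target_area, full_area, current_zoom):
    """One pass of the Figure 16 flow: derive the target zoom (S803), the
    current zoom center (S804), and the readout manner (S805 to S807)."""
    target_zoom = math.sqrt(full_area / target_area)
    denom = max(target_zoom - 1.0, 1e-6)  # guard against a degenerate target
    t = min(max((current_zoom - 1.0) / denom, 0.0), 1.0)
    center = tuple(s + t * (d - s) for s, d in zip(sensor_center, target_center))
    mode = "Remosaic" if current_zoom >= 2.0 else "Binning"
    return center, mode  # used to render and display the preview in S808

# Example: target area is 1/4 of the frame, preview currently at 1.5x zoom.
print(method_800_step((2800, 900), (2000, 1500), 1.0, 4.0, 1.5))
# ((2400.0, 1200.0), 'Binning')
```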
The image processing method provided by the embodiments of this application can implement zoom based on the target area during zoom shooting. Specifically, during the entire zoom shooting process, the zoom center of the electronic device does not need to always be the center of the sensor; instead, the zoom center smoothly transitions from the center of the sensor to the center of the target area in the shooting scene, so that the user can implement tracking zoom for the target area in the shooting scene without moving the electronic device, improving the user's shooting experience.
Optionally, if the first pixel combination manner is reading out the image in the 3×3 Remosaic manner, then if the area of the target area is smaller than or equal to 1/9 of the area of the first preview image, the zoom magnification corresponding to the target zoom center is the 3× zoom magnification; if the area of the target area is greater than 1/9 of the area of the first preview image, the zoom magnification corresponding to the target zoom center point is √(A2/A1), where A1 represents the area of the target area and A2 represents the area of the first preview image; and the preset condition is that the current zoom magnification is greater than or equal to the 3× zoom magnification.
Optionally, if the first pixel combination manner is reading out the image in the 4×4 Remosaic manner, then if the area of the target area is smaller than or equal to 1/16 of the area of the first preview image, the zoom magnification corresponding to the target zoom center is the 4× zoom magnification; if the area of the target area is greater than 1/16 of the area of the first preview image, the zoom magnification corresponding to the target zoom center point is √(A2/A1), where A1 represents the area of the target area and A2 represents the area of the first preview image; and the preset condition is that the current zoom magnification is greater than or equal to the 4× zoom magnification.
本申请实施例提供的图像处理方法，在变焦拍摄的过程中可以实现基于目标区域的变焦；例如，在整个变焦拍摄过程中，变焦中心无需始终为传感器的中心，而是从传感器的中心平滑过渡至拍摄场景中目标区域的中心；使得用户在无需移动电子设备的情况下，针对拍摄场景中的目标区域实现追踪变焦，提高用户的拍摄体验。此外，在变焦倍率较大时，可以采用第一像素合并方式读出图像，从而避免变焦后图像的清晰度损失较大，提升变焦处理后图像的清晰度。With the image processing method provided by the embodiments of this application, zooming based on the target area can be achieved during zoom shooting; for example, throughout the zooming process the zoom center does not need to remain at the center of the sensor, but instead transitions smoothly from the center of the sensor to the center of the target area in the shooting scene, so that the user can track and zoom on the target area without moving the electronic device, improving the user's shooting experience. In addition, when the zoom magnification is large, the image can be read out using the first pixel combining method, which avoids a large loss of definition in the zoomed image and improves the definition of the image after zoom processing.
应理解,上述举例说明是为了帮助本领域技术人员理解本申请实施例,而非要将本申请实施例限于所例示的具体数值或具体场景。本领域技术人员根据所给出的上述举例说明,显然可以进行各种等价的修改或变化,这样的修改或变化也落入本申请实施例的范围内。It should be understood that the above examples are to help those skilled in the art understand the embodiments of the present application, but are not intended to limit the embodiments of the present application to the specific numerical values or specific scenarios illustrated. Those skilled in the art can obviously make various equivalent modifications or changes based on the above examples, and such modifications or changes also fall within the scope of the embodiments of the present application.
上文结合图1至图16详细描述了本申请实施例提供的图像处理方法;下面将结合图17与图18详细描述本申请的装置实施例。应理解,本申请实施例中的装置可以执行前述本申请实施例的各种方法,即以下各种产品的具体工作过程,可以参考前述方法实施例中的对应过程。The image processing method provided by the embodiment of the present application is described in detail above with reference to Figures 1 to 16; below, the device embodiment of the present application will be described in detail with reference to Figures 17 and 18. It should be understood that the devices in the embodiments of the present application can perform various methods of the foregoing embodiments of the present application, that is, for the specific working processes of the following various products, reference can be made to the corresponding processes in the foregoing method embodiments.
图17是本申请实施例提供的一种电子设备的结构示意图。该电子设备900包括处理模块910与显示模块920。Figure 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device 900 includes a processing module 910 and a display module 920 .
其中，处理模块910用于启动电子设备中的相机应用程序；显示模块920用于显示第一预览图像，第一预览图像对应的变焦倍率为第一倍率，第一预览图像的中心点为第一中心点；处理模块910还用于确定第一预览图像中的第二中心点，第二中心点为目标区域的中心点，第一中心点与第二中心点不重合；检测到第一操作，第一操作指示电子设备的变焦倍率为第二倍率；显示模块920还用于响应于第一操作，显示第二预览图像，第二预览图像的中心点为第二中心点。The processing module 910 is configured to start the camera application in the electronic device; the display module 920 is configured to display a first preview image, where the zoom magnification corresponding to the first preview image is the first magnification and the center point of the first preview image is the first center point; the processing module 910 is further configured to determine a second center point in the first preview image, where the second center point is the center point of the target area and does not coincide with the first center point, and to detect a first operation indicating that the zoom magnification of the electronic device is the second magnification; the display module 920 is further configured to display, in response to the first operation, a second preview image whose center point is the second center point.
可选地,作为一个实施例,所述第二预览图像与所述目标区域重合。Optionally, as an embodiment, the second preview image coincides with the target area.
可选地,作为一个实施例,所述第二预览图像包括所述目标区域。Optionally, as an embodiment, the second preview image includes the target area.
可选地,作为一个实施例,所述第二预览图像包括所述目标区域的一部分。Optionally, as an embodiment, the second preview image includes a part of the target area.
可选地,作为一个实施例,在显示所述第一预览图像与所述第二预览图像时,所述电子设备所处的位置相同。Optionally, as an embodiment, when the first preview image and the second preview image are displayed, the position of the electronic device is the same.
可选地,作为一个实施例,所述处理模块910还用于:Optionally, as an embodiment, the processing module 910 is also used to:
检测到第二操作,所述第二操作指示所述电子设备的变焦倍率为第三倍率;A second operation is detected, the second operation indicating that the zoom magnification of the electronic device is a third magnification;
响应于所述第二操作，显示第三预览图像，所述第三预览图像的中心点为第三中心点，所述第三中心点在所述第一中心点与所述第二中心点的连线上。In response to the second operation, a third preview image is displayed, where the center point of the third preview image is a third center point, and the third center point lies on the line connecting the first center point and the second center point.
可选地,作为一个实施例,所述第一中心点与所述第二中心点的连线上包括N个中心点,所述N个中心点中的每个中心点与至少一个变焦倍率对应,N为大于或者等于2的整数。Optionally, as an embodiment, the connection between the first center point and the second center point includes N center points, and each of the N center points corresponds to at least one zoom magnification. , N is an integer greater than or equal to 2.
可选地,作为一个实施例,所述处理模块910还用于:Optionally, as an embodiment, the processing module 910 is also used to:
对所述第一中心点与所述第二中心点的连线进行等分,得到所述N个中心点。The line connecting the first center point and the second center point is equally divided to obtain the N center points.
可选地,作为一个实施例,所述处理模块910还用于:Optionally, as an embodiment, the processing module 910 is also used to:
根据插值算法对所述第一中心点与所述第二中心点的连线进行分割,得到所述N个中心点。The line connecting the first center point and the second center point is divided according to an interpolation algorithm to obtain the N center points.
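To make the two options above concrete, the sketch below generates the N center points on the segment between the first and second center points, either by dividing the segment equally or by applying an interpolation curve to the equal division; the smooth-step easing shown is only one possible interpolation, chosen here as an assumption.

import numpy as np

def center_points(first, second, n, easing=None):
    # Returns n points from `first` (exclusive) towards `second` (inclusive).
    t = np.linspace(0.0, 1.0, n + 1)[1:]      # equal division of the segment
    if easing is not None:
        t = easing(t)                          # reshaped by an interpolation curve
    p0 = np.asarray(first, dtype=float)
    p1 = np.asarray(second, dtype=float)
    return p0 + t[:, None] * (p1 - p0)

smooth_step = lambda t: t * t * (3.0 - 2.0 * t)   # one possible interpolation
print(center_points((2000, 1500), (2600, 900), n=4))                       # equal division
print(center_points((2000, 1500), (2600, 900), n=4, easing=smooth_step))   # interpolated division

Each of the N center points can then be associated with one or more zoom magnifications, as described above.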
可选地,作为一个实施例,所述处理模块910具体用于:Optionally, as an embodiment, the processing module 910 is specifically used to:
若所述目标区域的面积和所述第一预览图像的面积之间的比值小于或者等于第一预设阈值,采用第一像素合并方式显示所述第二预览图像;If the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, display the second preview image using a first pixel merging method;
若所述目标区域的面积和所述第一预览图像的面积之间的比值大于第一预设阈值,采用第二像素合并方式显示所述第二预览图像。If the ratio between the area of the target area and the area of the first preview image is greater than the first preset threshold, the second preview image is displayed using a second pixel combining method.
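A minimal sketch of the threshold test described above is given below; the concrete value of 1/9 is only an example, since the text leaves the first preset threshold unspecified.

def choose_pixel_combining(target_area: float, preview_area: float,
                           first_preset_threshold: float = 1.0 / 9.0) -> str:
    ratio = target_area / preview_area
    if ratio <= first_preset_threshold:
        return "first (Remosaic)"    # small target: full-resolution readout
    return "second (Binning)"        # larger target: merged readout

print(choose_pixel_combining(100.0, 1200.0))  # first (Remosaic), ratio ~0.083
print(choose_pixel_combining(500.0, 1200.0))  # second (Binning), ratio ~0.417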
可选地,作为一个实施例,所述处理模块910具体用于:Optionally, as an embodiment, the processing module 910 is specifically used to:
采用所述第二倍率对应的裁切区域对所述第一预览图像进行裁切处理,得到第一图像区域,所述第一图像区域中包括M个像素;Using the cutting area corresponding to the second magnification to perform cutting processing on the first preview image, a first image area is obtained, and the first image area includes M pixels;
对所述M个像素进行重新排列处理,得到K个像素,M、K均为正整数,K大于M;The M pixels are rearranged to obtain K pixels, M and K are both positive integers, and K is greater than M;
基于所述K个像素显示所述第二预览图像。The second preview image is displayed based on the K pixels.
可选地,作为一个实施例,所述处理模块910具体用于:Optionally, as an embodiment, the processing module 910 is specifically used to:
采用所述第二倍率对应的裁切区域对所述第一预览图像进行裁切处理,得到第一图像区域,所述第一图像区域中包括M个像素;Using the cutting area corresponding to the second magnification to perform cutting processing on the first preview image, a first image area is obtained, and the first image area includes M pixels;
对所述M个像素进行合并处理,得到H个像素,M、H均为正整数,H小于M;The M pixels are merged to obtain H pixels, M and H are both positive integers, and H is less than M;
基于所述H个像素显示第二预览图像。A second preview image is displayed based on the H pixels.
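The two display paths above can be summarized as simple pixel-count bookkeeping: crop the first preview image to the region selected by the second magnification (M pixels), then either read that region out at K > M pixels (rearrangement) or merge it down to H < M pixels. The 3×3 rearrangement factor and the 2×2 merge used below are assumptions for illustration.

def crop_pixel_count(preview_w: int, preview_h: int, second_magnification: float) -> int:
    # The crop region covers 1/magnification of each linear dimension.
    crop_w = int(preview_w / second_magnification)
    crop_h = int(preview_h / second_magnification)
    return crop_w * crop_h                      # M pixels in the first image area

def rearranged_count(m: int, factor: int = 3) -> int:
    return m * factor * factor                  # K = M * factor^2, K > M

def merged_count(m: int, merge: int = 2) -> int:
    return m // (merge * merge)                 # H = M / merge^2, H < M

m = crop_pixel_count(4000, 3000, second_magnification=3.0)
print(m, rearranged_count(m), merged_count(m))  # M, then K > M, then H < M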
可选地,作为一个实施例,所述处理模块910具体用于:Optionally, as an embodiment, the processing module 910 is specifically used to:
检测到用户对所述第一预览图像的点击操作,所述第二中心点为所述用户与所述电子设备的触碰点。The user's click operation on the first preview image is detected, and the second center point is the touch point between the user and the electronic device.
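As a small illustration of this embodiment, the sketch below maps a touch point on the display to the corresponding point in the first preview image, which can then serve as the second center point; the simple proportional mapping and the example resolutions are assumptions, since the text does not specify the coordinate transform.

def touch_to_second_center(touch_xy, display_wh, preview_wh):
    # Scale the touch coordinates from display space into preview-image space.
    tx, ty = touch_xy
    dw, dh = display_wh
    pw, ph = preview_wh
    return (tx * pw / dw, ty * ph / dh)

print(touch_to_second_center((540, 1170), (1080, 2340), (4000, 3000)))  # (2000.0, 1500.0)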
可选地,作为一个实施例,所述处理模块910具体用于:Optionally, as an embodiment, the processing module 910 is specifically used to:
检测到所述第一预览图像中的第一主体,所述第二中心点为所述第一主体的中心点。The first subject in the first preview image is detected, and the second center point is the center point of the first subject.
可选地,作为一个实施例,所述第二倍率是基于所述目标区域的面积与所述第一预览图像的面积之间的比值确定的。Optionally, as an embodiment, the second magnification is determined based on the ratio between the area of the target area and the area of the first preview image.
可选地,作为一个实施例,若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于1/4,所述第二倍率为2倍倍率。Optionally, as an embodiment, if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/4, the second magnification is 2 times the magnification.
可选地,作为一个实施例,若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于1/9,所述第二倍率为3倍倍率。Optionally, as an embodiment, if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/9, the second magnification is 3 times the magnification.
可选地，作为一个实施例，若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于1/16，所述第二倍率为4倍倍率。Optionally, as an embodiment, if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/16, the second magnification is 4 times the magnification.
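The three examples above can be read as a simple lookup from the area ratio to the second magnification, as sketched below; the tightest ratio is tested first, and the fallback of keeping a 1x magnification when the target nearly fills the preview is an assumption, not stated in the text.

def second_magnification_from_ratio(target_area: float, preview_area: float) -> float:
    ratio = target_area / preview_area
    if ratio <= 1.0 / 16.0:
        return 4.0
    if ratio <= 1.0 / 9.0:
        return 3.0
    if ratio <= 1.0 / 4.0:
        return 2.0
    return 1.0   # assumed fallback: target nearly fills the preview

print(second_magnification_from_ratio(50.0, 1000.0))   # 4.0 (ratio 0.05 <= 1/16)
print(second_magnification_from_ratio(200.0, 1000.0))  # 2.0 (ratio 0.20 <= 1/4)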
需要说明的是,上述电子设备900以功能模块的形式体现。这里的术语“模块”可以通过软件和/或硬件形式实现,对此不作具体限定。It should be noted that the above-mentioned electronic device 900 is embodied in the form of a functional module. The term "module" here can be implemented in the form of software and/or hardware, and is not specifically limited.
例如,“模块”可以是实现上述功能的软件程序、硬件电路或二者结合。所述硬件电路可能包括应用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。For example, a "module" may be a software program, a hardware circuit, or a combination of both that implements the above functions. The hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (such as a shared processor, a dedicated processor, or a group processor) for executing one or more software or firmware programs. etc.) and memory, merged logic circuitry, and/or other suitable components to support the described functionality.
因此,在本申请的实施例中描述的各示例的单元,能够以电子硬件、或者计算机 软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Therefore, the units of each example described in the embodiments of the present application can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each specific application, but such implementations should not be considered beyond the scope of this application.
图18示出了本申请提供的一种电子设备的结构示意图。图18中的虚线表示该单元或该模块为可选的;电子设备1100可以用于实现上述方法实施例中描述的方法。Figure 18 shows a schematic structural diagram of an electronic device provided by this application. The dotted line in Figure 18 indicates that this unit or module is optional; the electronic device 1100 can be used to implement the method described in the above method embodiment.
电子设备1100包括一个或多个处理器1101,该一个或多个处理器1101可支持电子设备1100实现方法实施例中的图像处理方法。处理器1101可以是通用处理器或者专用处理器。例如,处理器1101可以是中央处理器(central processing unit,CPU)、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件,如分立门、晶体管逻辑器件或分立硬件组件。The electronic device 1100 includes one or more processors 1101, and the one or more processors 1101 can support the electronic device 1100 to implement the image processing method in the method embodiment. Processor 1101 may be a general-purpose processor or a special-purpose processor. For example, the processor 1101 may be a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (field programmable). gate array, FPGA) or other programmable logic devices, such as discrete gates, transistor logic devices, or discrete hardware components.
处理器1101可以用于对电子设备1100进行控制,执行软件程序,处理软件程序的数据。电子设备1100还可以包括通信单元1105,用以实现信号的输入(接收)和输出(发送)。The processor 1101 can be used to control the electronic device 1100, execute software programs, and process data of the software programs. The electronic device 1100 may also include a communication unit 1105 to implement input (reception) and output (transmission) of signals.
例如,电子设备1100可以是芯片,通信单元1105可以是该芯片的输入和/或输出电路,或者,通信单元1105可以是该芯片的通信接口,该芯片可以作为终端设备或其它电子设备的组成部分。For example, the electronic device 1100 may be a chip, and the communication unit 1105 may be an input and/or output circuit of the chip, or the communication unit 1105 may be a communication interface of the chip, and the chip may be used as a component of a terminal device or other electronic device. .
又例如,电子设备1100可以是终端设备,通信单元1105可以是该终端设备的收发器,或者,通信单元1105可以是该终端设备的收发电路。For another example, the electronic device 1100 may be a terminal device, and the communication unit 1105 may be a transceiver of the terminal device, or the communication unit 1105 may be a transceiver circuit of the terminal device.
电子设备1100中可以包括一个或多个存储器1102,其上存有程序1104,程序1104可被处理器1101运行,生成指令1103,使得处理器1101根据指令1103执行上述方法实施例中描述的图像处理方法。The electronic device 1100 may include one or more memories 1102 on which a program 1104 is stored. The program 1104 may be run by the processor 1101 to generate instructions 1103, so that the processor 1101 performs the image processing described in the above method embodiment according to the instructions 1103. method.
可选地，存储器1102中还可以存储有数据。Optionally, data may also be stored in the memory 1102.
可选地,处理器1101还可以读取存储器1102中存储的数据,该数据可以与程序1104存储在相同的存储地址,该数据也可以与程序1104存储在不同的存储地址。Optionally, the processor 1101 can also read the data stored in the memory 1102. The data can be stored at the same storage address as the program 1104, or the data can be stored at a different storage address as the program 1104.
处理器1101和存储器1102可以单独设置,也可以集成在一起,例如,集成在终端设备的系统级芯片(system on chip,SOC)上。The processor 1101 and the memory 1102 can be provided separately or integrated together, for example, integrated on a system on chip (SOC) of the terminal device.
示例性地，存储器1102可以用于存储本申请实施例中提供的图像处理方法的相关程序1104，处理器1101可以用于在执行图像处理时调用存储器1102中存储的图像处理方法的相关程序1104，执行本申请实施例的图像处理方法；例如，启动电子设备中的相机应用程序；显示第一预览图像，第一预览图像对应的变焦倍率为第一倍率，第一预览图像的中心点为第一中心点；确定第一预览图像中的第二中心点，第二中心点为目标区域的中心点，第一中心点与第二中心点不重合；检测到第一操作，第一操作指示电子设备的变焦倍率为第二倍率；响应于第一操作，显示第二预览图像，第二预览图像的中心点为第二中心点。For example, the memory 1102 may be used to store the program 1104 related to the image processing method provided in the embodiments of this application, and the processor 1101 may be used to call the program 1104 stored in the memory 1102 to execute the image processing method of the embodiments of this application when performing image processing, for example: start the camera application in the electronic device; display a first preview image, where the zoom magnification corresponding to the first preview image is the first magnification and the center point of the first preview image is the first center point; determine a second center point in the first preview image, where the second center point is the center point of the target area and does not coincide with the first center point; detect a first operation indicating that the zoom magnification of the electronic device is the second magnification; and, in response to the first operation, display a second preview image whose center point is the second center point.
本申请还提供了一种计算机程序产品,该计算机程序产品被处理器1101执行时实现本申请中任一方法实施例的图像处理方法。This application also provides a computer program product, which when executed by the processor 1101 implements the image processing method of any method embodiment in this application.
该计算机程序产品可以存储在存储器1102中,例如是程序1104,程序1104经过预处理、编译、汇编和链接等处理过程最终被转换为能够被处理器1101执行的可执行 目标文件。The computer program product may be stored in the memory 1102, such as a program 1104. The program 1104 is finally converted into an executable object file that can be executed by the processor 1101 through processes such as preprocessing, compilation, assembly and linking.
本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被计算机执行时实现本申请中任一方法实施例所述的图像处理方法。该计算机程序可以是高级语言程序,也可以是可执行目标程序。This application also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a computer, the image processing method described in any method embodiment of this application is implemented. The computer program may be a high-level language program or an executable object program.
该计算机可读存储介质例如是存储器1102。存储器1102可以是易失性存储器或非易失性存储器,或者,存储器1102可以同时包括易失性存储器和非易失性存储器。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。The computer-readable storage medium is, for example, memory 1102. Memory 1102 may be volatile memory or nonvolatile memory, or memory 1102 may include both volatile memory and nonvolatile memory. Among them, non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically removable memory. Erase electrically programmable read-only memory (EPROM, EEPROM) or flash memory. Volatile memory can be random access memory (RAM), which is used as an external cache. By way of illustration, but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (synchlink DRAM, SLDRAM) ) and direct memory bus random access memory (direct rambus RAM, DR RAM).
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those of ordinary skill in the art will appreciate that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented with electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each specific application, but such implementations should not be considered beyond the scope of this application.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that for the convenience and simplicity of description, the specific working processes of the systems, devices and units described above can be referred to the corresponding processes in the foregoing method embodiments, and will not be described again here.
在本申请所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。例如，以上所描述的电子设备的实施例仅仅是示意性的，例如，所述模块的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the embodiments of the electronic device described above are only illustrative; for instance, the division into modules is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。It should be understood that in the various embodiments of the present application, the size of the sequence numbers of each process does not mean the order of execution. The execution order of each process should be determined by its functions and internal logic, and should not be used in the embodiments of the present application. The implementation process constitutes any limitation.
另外，本文中的术语“和/或”，仅仅是一种描述关联对象的关联关系，表示可以存在三种关系，例如，A和/或B，可以表示：单独存在A，同时存在A和B，单独存在B这三种情况。另外，本文中字符“/”，一般表示前后关联对象是一种“或”的关系。In addition, the term "and/or" in this document merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in this document generally indicates an "or" relationship between the associated objects.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（read-only memory，ROM）、随机存取存储器（random access memory，RAM）、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。总之，以上所述仅为本申请技术方案的较佳实施例而已，并非用于限定本申请的保护范围。凡在本申请的精神和原则之内，所作的任何修改、等同替换、改进等，均应包含在本申请的保护范围之内。The above are only specific implementations of this application, but the protection scope of this application is not limited thereto; any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims. In short, the above descriptions are only preferred embodiments of the technical solution of this application and are not intended to limit its protection scope; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included in the protection scope of this application.

Claims (21)

  1. 一种图像处理方法,其特征在于,所述图像处理方法应用于电子设备,包括:An image processing method, characterized in that the image processing method is applied to electronic equipment, including:
    启动所述电子设备中的相机应用程序;Launch the camera application in the electronic device;
    显示第一预览图像,所述第一预览图像对应的变焦倍率为第一倍率,所述第一预览图像的中心点为第一中心点;Display a first preview image, the zoom magnification corresponding to the first preview image is the first magnification, and the center point of the first preview image is the first center point;
    确定所述第一预览图像中的第二中心点,所述第二中心点为目标区域的中心点,所述第一中心点与所述第二中心点不重合;Determine a second center point in the first preview image, the second center point is the center point of the target area, and the first center point does not coincide with the second center point;
    检测到第一操作,所述第一操作指示所述电子设备的变焦倍率为第二倍率;Detecting a first operation indicating that the zoom magnification of the electronic device is a second magnification;
    响应于所述第一操作,显示第二预览图像,所述第二预览图像的中心点为所述第二中心点。In response to the first operation, a second preview image is displayed, and the center point of the second preview image is the second center point.
  2. 如权利要求1所述的图像处理方法,其特征在于,所述第二预览图像与所述目标区域重合。The image processing method of claim 1, wherein the second preview image coincides with the target area.
  3. 如权利要求2所述的图像处理方法,其特征在于,所述第二预览图像包括所述目标区域。The image processing method of claim 2, wherein the second preview image includes the target area.
  4. 如权利要求2所述的图像处理方法,其特征在于,所述第二预览图像包括所述目标区域的一部分。The image processing method of claim 2, wherein the second preview image includes a part of the target area.
  5. 如权利要求1至4中任一项所述的图像处理方法,其特征在于,在显示所述第一预览图像与所述第二预览图像时,所述电子设备所处的位置相同。The image processing method according to any one of claims 1 to 4, wherein when the first preview image and the second preview image are displayed, the position of the electronic device is the same.
  6. 如权利要求1至5中任一项所述的图像处理方法,其特征在于,还包括:The image processing method according to any one of claims 1 to 5, further comprising:
    检测到第二操作,所述第二操作指示所述电子设备的变焦倍率为第三倍率;A second operation is detected, the second operation indicating that the zoom magnification of the electronic device is a third magnification;
    响应于所述第二操作，显示第三预览图像，所述第三预览图像的中心点为第三中心点，所述第三中心点在所述第一中心点与所述第二中心点的连线上。In response to the second operation, a third preview image is displayed, where the center point of the third preview image is a third center point, and the third center point lies on the line connecting the first center point and the second center point.
  7. 如权利要求1至5中任一项所述的图像处理方法,其特征在于,所述第一中心点与所述第二中心点的连线上包括N个中心点,所述N个中心点中的每个中心点与至少一个变焦倍率对应,N为大于或者等于2的整数。The image processing method according to any one of claims 1 to 5, characterized in that the connection between the first center point and the second center point includes N center points, and the N center points Each center point in corresponds to at least one zoom factor, and N is an integer greater than or equal to 2.
  8. 如权利要求7所述的图像处理方法,其特征在于,还包括:The image processing method according to claim 7, further comprising:
    对所述第一中心点与所述第二中心点的连线进行等分,得到所述N个中心点。The line connecting the first center point and the second center point is equally divided to obtain the N center points.
  9. 如权利要求7所述的图像处理方法,其特征在于,还包括:The image processing method according to claim 7, further comprising:
    根据插值算法对所述第一中心点与所述第二中心点的连线进行分割,得到所述N个中心点。The line connecting the first center point and the second center point is divided according to an interpolation algorithm to obtain the N center points.
  10. 如权利要求1至9中的任一项所述的图像处理方法,其特征在于,所述响应于所述第一操作,显示第二预览图像,包括:The image processing method according to any one of claims 1 to 9, wherein displaying a second preview image in response to the first operation includes:
    若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于第一预设阈值,采用第一像素合并方式显示所述第二预览图像;If the ratio between the area of the target area and the area of the first preview image is less than or equal to the first preset threshold, display the second preview image using a first pixel merging method;
    若所述目标区域的面积与所述第一预览图像的面积之间的比值大于第一预设阈值,采用第二像素合并方式显示所述第二预览图像。If the ratio between the area of the target area and the area of the first preview image is greater than the first preset threshold, the second preview image is displayed using a second pixel combining method.
  11. 如权利要求10所述的图像处理方法,其特征在于,所述采用第一像素合并方式显示所述第二预览图像,包括:The image processing method according to claim 10, wherein the display of the second preview image using a first pixel merging method includes:
    采用所述第二倍率对应的裁切区域对所述第一预览图像进行裁切处理,得到第一图像区域,所述第一图像区域中包括M个像素;Using the cutting area corresponding to the second magnification to perform cutting processing on the first preview image, a first image area is obtained, and the first image area includes M pixels;
    对所述M个像素进行重新排列处理,得到K个像素,M、K均为正整数,K大于M;The M pixels are rearranged to obtain K pixels, M and K are both positive integers, and K is greater than M;
    基于所述K个像素显示所述第二预览图像。The second preview image is displayed based on the K pixels.
  12. 如权利要求10所述的图像处理方法,其特征在于,所述采用第一像素合并方式显示所述第二预览图像,包括:The image processing method according to claim 10, wherein the display of the second preview image using a first pixel merging method includes:
    采用所述第二倍率对应的裁切区域对所述第一预览图像进行裁切处理,得到第一图像区域,所述第一图像区域中包括M个像素;Using the cutting area corresponding to the second magnification to perform cutting processing on the first preview image, a first image area is obtained, and the first image area includes M pixels;
    对所述M个像素进行合并处理,得到H个像素,M、H均为正整数,H小于M;The M pixels are merged to obtain H pixels, M and H are both positive integers, and H is less than M;
    基于所述H个像素显示第二预览图像。A second preview image is displayed based on the H pixels.
  13. 如权利要求1至12中的任一项所述的图像处理方法,其特征在于,所述确定所述第一预览图像中的第二中心点,包括:The image processing method according to any one of claims 1 to 12, wherein determining the second center point in the first preview image includes:
    检测到用户对所述第一预览图像的点击操作,所述第二中心点为所述用户与所述电子设备的触碰点。The user's click operation on the first preview image is detected, and the second center point is the touch point between the user and the electronic device.
  14. 如权利要求1至12中的任一项所述的图像处理方法,其特征在于,所述确定所述第一预览图像中的第二中心点,包括:The image processing method according to any one of claims 1 to 12, wherein determining the second center point in the first preview image includes:
    检测到所述第一预览图像中的第一主体,所述第二中心点为所述第一主体的中心点。The first subject in the first preview image is detected, and the second center point is the center point of the first subject.
  15. 如权利要求1至14中任一项所述的图像处理方法,其特征在于,所述第二倍率是基于所述目标区域的面积与所述第一预览图像的面积之间的比值确定的。The image processing method according to any one of claims 1 to 14, wherein the second magnification is determined based on a ratio between an area of the target area and an area of the first preview image.
  16. 如权利要求15所述的图像处理方法,其特征在于,若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于1/4,所述第二倍率为2倍倍率。The image processing method according to claim 15, characterized in that if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/4, the second magnification is 2 times magnification.
  17. 如权利要求15所述的图像处理方法,其特征在于,若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于1/9,所述第二倍率为3倍倍率。The image processing method of claim 15, wherein if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/9, the second magnification is 3 times magnification.
  18. 如权利要求15所述的图像处理方法，其特征在于，若所述目标区域的面积与所述第一预览图像的面积之间的比值小于或者等于1/16，所述第二倍率为4倍倍率。The image processing method of claim 15, wherein if the ratio between the area of the target area and the area of the first preview image is less than or equal to 1/16, the second magnification is 4 times the magnification.
  19. 一种电子设备,其特征在于,所述电子设备包括处理器和存储器,所述存储器用于存储计算机程序,所述处理器用于从所述存储器中调用并运行所述计算机程序,使得所述电子设备执行如权利要求1至18中任一项所述的图像处理方法。An electronic device, characterized in that the electronic device includes a processor and a memory, the memory is used to store a computer program, the processor is used to call and run the computer program from the memory, so that the electronic device The device executes the image processing method according to any one of claims 1 to 18.
  20. 一种芯片,其特征在于,包括处理器,当所述处理器执行指令时,所述处理器执行如权利要求1至18中任一项所述的图像处理方法。A chip, characterized in that it includes a processor. When the processor executes instructions, the processor executes the image processing method according to any one of claims 1 to 18.
  21. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储了计算机程序,当所述计算机程序被处理器执行时,使得处理器执行如权利要求1至18中任一项所述的图像处理方法。A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program. When the computer program is executed by a processor, the processor causes the processor to execute the method as claimed in any one of claims 1 to 18. The image processing method described above.
PCT/CN2022/140810 2022-03-29 2022-12-21 Image processing method and electronic device WO2023185127A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210318644.XA CN116939363B (en) 2022-03-29 Image processing method and electronic equipment
CN202210318644.X 2022-03-29

Publications (1)

Publication Number Publication Date
WO2023185127A1 true WO2023185127A1 (en) 2023-10-05

Family

ID=88198915

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/140810 WO2023185127A1 (en) 2022-03-29 2022-12-21 Image processing method and electronic device

Country Status (1)

Country Link
WO (1) WO2023185127A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018018930A1 (en) * 2016-07-29 2018-02-01 广东欧珀移动通信有限公司 Image zooming processing method, device, and terminal apparatus
CN109286750A (en) * 2018-09-21 2019-01-29 重庆传音科技有限公司 A kind of Zooming method and a kind of intelligent terminal based on intelligent terminal
CN112825543A (en) * 2019-11-20 2021-05-21 华为技术有限公司 Shooting method and equipment
CN111970439A (en) * 2020-08-10 2020-11-20 Oppo(重庆)智能科技有限公司 Image processing method and device, terminal and readable storage medium
CN112532875A (en) * 2020-11-24 2021-03-19 展讯通信(上海)有限公司 Terminal device, image processing method and device thereof, and storage medium

Also Published As

Publication number Publication date
CN116939363A (en) 2023-10-24

Similar Documents

Publication Publication Date Title
US9860448B2 (en) Method and electronic device for stabilizing video
CN111212235B (en) Long-focus shooting method and electronic equipment
WO2022262344A1 (en) Photographing method and electronic device
WO2021185374A1 (en) Image capturing method and electronic device
CN116711316A (en) Electronic device and operation method thereof
WO2023142830A1 (en) Camera switching method, and electronic device
EP4325877A1 (en) Photographing method and related device
JP7383911B2 (en) Imaging system, image processing device, imaging device and program
WO2023060921A1 (en) Image processing method and electronic device
CN115767290B (en) Image processing method and electronic device
WO2023185127A1 (en) Image processing method and electronic device
WO2023005355A1 (en) Image anti-shake method and electronic device
CN116939363B (en) Image processing method and electronic equipment
CN113709355B (en) Sliding zoom shooting method and electronic equipment
CN116128739A (en) Training method of downsampling model, image processing method and device
CN114531539B (en) Shooting method and electronic equipment
WO2023035868A1 (en) Photographing method and electronic device
CN115767287B (en) Image processing method and electronic equipment
CN114979458B (en) Image shooting method and electronic equipment
EP4262226A1 (en) Photographing method and related device
EP4228236A1 (en) Image processing method and electronic device
CN117135459A (en) Image anti-shake method and electronic equipment
CN116668837A (en) Method for displaying thumbnail images and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934942

Country of ref document: EP

Kind code of ref document: A1