WO2021179186A1 - Focusing method and apparatus, and electronic device
- Publication number
- WO2021179186A1 (PCT/CN2020/078677)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- image
- areas
- unmarked
- regions
Classifications
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Definitions
- the embodiments of the present application relate to the field of image processing, and in particular, to a focusing method, device, and electronic equipment.
- an auto focus system can be set in the electronic device so that the electronic device can automatically adjust the focus to achieve clear shooting.
- the electronic device may generate a preview image, set a rectangular area in the preview image as a focus window, and adjust the focus to improve the image clarity in the focus window, thereby realizing auto focus.
- this focusing method may have some problems.
- for example, when the shooting target does not fill the focus window, the focus window will include other foreground or background images.
- the focus window is usually fixed in the center area of the preview image. Then, when the shooting target is not entirely in the center of the field of view, part of the image corresponding to the shooting target may be outside the focus window. At this time, the electronic device cannot accurately focus on the shooting target according to the focus window, and therefore cannot take a clear shot of the target.
- the embodiments of the present application provide a focusing method, device, and electronic equipment, which can improve the accuracy of focusing on a shooting target.
- a focusing method includes: generating a first image, the first image including an image of a photographing target; and determining a target position in the first image.
- the first image is divided into P background regions, a target region, and Q unmarked regions.
- the target area includes the target position
- the Q unmarked areas are Q areas in the first image excluding the P background areas and the target area
- P and Q are both positive integers.
- according to the P background regions, one or more regions of the Q unmarked regions are merged into at least one background region of the P background regions
- at least one unmarked area among the Q unmarked areas, excluding the one or more areas, is merged into the target area to obtain a region of interest (ROI) that includes the shooting target.
- a focus area is determined according to the ROI, and the focus area includes at least part of the ROI.
- the electronic device can segment and merge the first image: unmarked areas similar to the image displayed in a background area are merged into that background area, and the remaining areas (that is, the unmarked areas that are not merged into any background area, together with the target area) are combined as an ROI, thereby accurately determining the contour of the shooting target in the first image.
- on this basis, the focus window is determined and focusing is performed, so as to achieve accurate focus on the shooting target, achieve clear shooting of the shooting target, and improve the image quality of the captured image and the filming rate.
- any one of the P background areas is in contact with the edge of the first image. Based on this solution, the electronic device can clearly determine the background area included in the first image.
- determining the target position in the first image includes: determining a preset position in the first image as the target position.
- the preset position may be the center position of the first image.
- determining the target position in the first image includes: determining, according to a user's instruction, a position in the first image corresponding to the instruction as the target position. Based on this solution, the electronic device can determine the target position according to the user's instructions, such as operations of touching the screen. It is understandable that the user can see the first image on the screen of the electronic device when starting to shoot. The user can touch the corresponding position in the first image to indicate, to the electronic device, the position of the photographing target in the first image. Therefore, the electronic device uses the position indicated by the user as the target position to accurately determine the position corresponding to the shooting target.
- the dividing of the first image into the P background areas, the target area, and the Q unmarked areas according to the target position includes: performing superpixel segmentation on the first image to obtain M areas, where M is an integer greater than or equal to 3; obtaining the target area among the M areas according to the target position; obtaining the P background areas among the M areas; and determining the Q areas of the M areas other than the P background areas and the target area as the Q unmarked areas.
- the electronic device can divide the first image into regions with different characteristics, and mark each region at the same time, so as to accurately determine the range of the background region.
- the merging, according to the P background areas, of one or more of the Q unmarked areas into at least one background area of the P background areas includes executing at least one merging operation as follows: for each of the P background areas, determining whether there is a first area, where the first area is an unmarked area that is adjacent to the background area and, among the at least one first adjacent area adjacent to the background area, has the highest feature similarity with the background area; and when the first area exists, merging the first area into the background area. Based on this solution, the electronic device can merge multiple fragmented areas into a small number of areas with similar characteristics. For example, unmarked areas with high similarity to a background area are merged into that background area. In this way, the range of the background area can be accurately expanded.
- the method further includes: for each unmarked area of the one or more unmarked areas other than the first area among the Q unmarked areas, determining whether there is a second area, where the second area is an unmarked area that is adjacent to the unmarked area and, among the at least one second adjacent area adjacent to the unmarked area, has the highest feature similarity with the unmarked area; and when the second area exists, merging the second area into the unmarked area.
- based on this solution, the electronic device can merge with each other the unmarked areas that have not been merged into a background area. The features of a merged unmarked area are updated, so that the electronic device can merge more unmarked areas into the background areas based on the new features.
- the focus area is a rectangular focus window, the rectangular focus window includes four sides, and on each of the four sides, the number of pixels that coincide with the ROI is less than a preset threshold.
- the electronic device can accurately determine the position and size of the focus window according to the ROI, so as to ensure accurate focus on the shooting target.
- this solution can effectively avoid the problem of an overly large focus window caused when the contour of the shooting target has a protruding part, thereby ensuring accurate focus on the main body of the shooting target and reducing the amount of calculation performed by the electronic device when focusing.
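As an illustration of this design, the following sketch derives a rectangular focus window from a binary ROI mask: it seeds a small window on the central part of the ROI and grows each side outward while the next row or column still crosses at least a threshold number of ROI pixels, so a side stops at a thin protrusion instead of expanding the window to contain it. This is one plausible reading of the rule above rather than the patent's exact procedure; the seeding strategy, `roi_mask`, and `thresh` are illustrative choices.

```python
import numpy as np

def focus_window(roi_mask: np.ndarray, thresh: int = 5):
    """Grow a window outward from the ROI's central band; each side stops
    once the next row/column outward crosses fewer than `thresh` ROI pixels,
    so thin protrusions of the ROI contour do not inflate the window."""
    ys, xs = np.nonzero(roi_mask)
    # seed: the central band of the ROI (an illustrative choice)
    top, bottom = int(np.percentile(ys, 40)), int(np.percentile(ys, 60))
    left, right = int(np.percentile(xs, 40)), int(np.percentile(xs, 60))
    h, w = roi_mask.shape
    changed = True
    while changed:
        changed = False
        if top > 0 and roi_mask[top - 1, left:right + 1].sum() >= thresh:
            top -= 1; changed = True
        if bottom < h - 1 and roi_mask[bottom + 1, left:right + 1].sum() >= thresh:
            bottom += 1; changed = True
        if left > 0 and roi_mask[top:bottom + 1, left - 1].sum() >= thresh:
            left -= 1; changed = True
        if right < w - 1 and roi_mask[top:bottom + 1, right + 1].sum() >= thresh:
            right += 1; changed = True
    return top, bottom, left, right  # inclusive bounds of the focus window
```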
- a focusing device is provided, which includes a generating unit, a determining unit, an acquiring unit, and a merging unit.
- the generating unit is configured to generate a first image
- the first image includes an image of the shooting target.
- the determining unit is used to determine the target position in the first image.
- the acquiring unit is configured to divide the first image into P background regions, target regions, and Q unmarked regions according to the target position. Wherein, the target area includes the target position, the Q unmarked areas are Q areas in the first image excluding the P background areas and the target area, and P and Q are both positive integers.
- the merging unit is configured to merge one or more of the Q unmarked areas into at least one background area of the P background areas according to the P background areas.
- the merging unit is also used for merging at least one unmarked area of the Q unmarked areas excluding the one or more areas into the target area to obtain a region of interest (ROI) including the shooting target.
- the determining unit is further configured to determine a focus area according to the ROI, and the focus area includes at least part of the ROI.
- any one of the P background areas is in contact with the edge of the first image.
- the determining unit is configured to determine a preset position in the first image as the target position.
- the determining unit is configured to determine a location in the first image corresponding to the instruction as the target location according to an instruction of the user.
- the acquisition unit is configured to perform super pixel segmentation on the first image to obtain M regions, where M is an integer greater than or equal to 3.
- the acquiring unit is further configured to acquire the target area in the M areas according to the target position.
- the obtaining unit is also used to obtain the P background areas in the M areas.
- the acquiring unit is further configured to determine the Q areas of the M areas except the P background areas and the target area as the Q unmarked areas.
- the merging unit is configured to perform at least one merging operation for at least one background area among the P background areas: determining whether there is a first area, where the first area is an unmarked area that is adjacent to the background area and, among the at least one first adjacent area adjacent to the background area, has the highest feature similarity with the background area; and when the first area exists, merging the first area into the background area.
- the merging unit is further configured to: each time a merging operation is performed, for each unmarked area of the one or more unmarked areas other than the first area among the Q unmarked areas, determine whether there is a second area, where the second area is an unmarked area that is adjacent to the unmarked area and, among the at least one second adjacent area adjacent to the unmarked area, has the highest feature similarity with the unmarked area; and when the second area exists, merge the second area into the unmarked area.
- the focus area is a rectangular focus window, the rectangular focus window includes four sides, and on each of the four sides, the number of pixels that coincide with the ROI is less than a preset threshold.
- in a third aspect, an electronic device is provided, which includes one or more processors and one or more memories.
- the memory is coupled with the processor, and the memory stores computer instructions.
- when the processor executes the computer instructions, the electronic device is caused to execute the focusing method described in any one of the first aspect and its possible designs.
- in a fourth aspect, a computer-readable storage medium is provided, which includes computer instructions; when the computer instructions are executed, the focusing method described in any one of the first aspect and its possible designs is executed.
- in a fifth aspect, a chip system is provided, which includes a processing circuit and an interface.
- the processing circuit is configured to call, from the storage medium, the computer program stored in the storage medium, and run the computer program to execute the focusing method described in any one of the first aspect and its possible designs.
- any one of the second aspect to the fifth aspect and their possible designs corresponds to the first aspect and any one of its possible designs, and can therefore bring about similar technical effects, which will not be repeated here.
- FIG. 1 is a schematic diagram of a focus window with a fixed central area provided in the prior art;
- FIG. 2 is a schematic structural diagram of an electronic device 100 provided by an embodiment of this application;
- FIG. 3 is a schematic diagram of the connection between a camera and a processor provided by an embodiment of this application;
- FIG. 4 is a schematic flowchart of a focusing method provided by an embodiment of this application;
- FIG. 5 is a schematic diagram of super pixel segmentation provided by an embodiment of this application;
- FIG. 6 is a schematic diagram of a region marking method provided by an embodiment of this application;
- FIG. 7 is a flowchart of a region merging method provided by an embodiment of this application;
- FIG. 8 is a schematic diagram of a process of a region merging method provided by an embodiment of this application;
- FIG. 9 is a schematic diagram of a result of a region merging method provided by an embodiment of this application;
- FIG. 10 is a schematic diagram of a method for determining a focus window according to an ROI provided by an embodiment of this application;
- FIG. 11 is a schematic diagram of a result of determining a focus window according to an ROI provided by an embodiment of this application;
- FIG. 12 is a schematic flowchart of another focusing method provided by an embodiment of this application;
- FIG. 13 is a schematic diagram of a focusing effect provided by an embodiment of this application;
- FIG. 14 is a schematic diagram of another focusing effect provided by an embodiment of this application;
- FIG. 15 is a schematic diagram of yet another focusing effect provided by an embodiment of this application;
- FIG. 16 is a schematic diagram of yet another focusing effect provided by an embodiment of this application;
- FIG. 17 is a schematic diagram of yet another focusing effect provided by an embodiment of this application;
- FIG. 18 is a schematic diagram of yet another focusing effect provided by an embodiment of this application;
- FIG. 19 is a schematic flowchart of an image tracking method provided by an embodiment of this application;
- FIG. 20 is a schematic diagram of the composition of a focusing device provided by an embodiment of this application;
- FIG. 21 is a schematic diagram of the composition of an electronic device provided by an embodiment of this application;
- FIG. 22 is a schematic diagram of the composition of a chip system provided by an embodiment of this application.
- the selection of the focus window largely determines the final image quality and the filming rate.
- the image detail information in the focus window should be rich, so that the electronic device can use the focus evaluation function to perform sharpness quality evaluation and determine whether the image in the focus window is in focus.
- the focus window should be chosen as close as possible to the center of the image to avoid the effects of scene drift caused by changes in the lens position during the autofocus process. Try to ensure that the scene and lighting conditions in the focus window remain unchanged during the focusing process to ensure the stability of the auto focus.
- when the target contrast or dynamic range in the focus window is relatively large, the focus evaluation function is more sensitive to information changes in the focus window.
- the following description takes the case where the electronic device is a smart phone as an example.
- the most commonly used focus window option is the center fixed area focus window.
- This solution uses a fixed-size rectangle set at the center of the preview image as the focus window. It is understandable that when shooting with a smart phone, the main part of the object of interest (that is, the shooting target) is usually located in the center of the field of view. Therefore, the smart phone uses the central fixed area focus window to focus, so that the image in the focus window, that is, the central area of the preview image, is adjusted to the in-focus state, and it is considered that the imaging effect of the entire image can thus be guaranteed.
- FIG. 1 shows a schematic diagram of a focus window with a fixed central area.
- a rectangle with a fixed size can be set as the focus window at the center of the preview image corresponding to the full field of view.
- the electronic device can adjust the expansion and contraction of the lens, so that the image in the focusing window is adjusted to the in-focus state, thereby realizing automatic focusing.
- the electronic device can evaluate the sharpness of the image in the focus window in units of pixels by using the focus evaluation function to determine whether the image in the focus window is adjusted to an in-focus state.
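The patent does not name a specific focus evaluation function; a common choice in autofocus practice, shown here purely as an illustration, is the variance of the image Laplacian, which peaks when the image in the focus window is sharpest.

```python
import numpy as np

def laplacian_variance(window: np.ndarray) -> float:
    """Variance-of-Laplacian sharpness score for a grayscale focus window.

    Higher values indicate a sharper, more in-focus image; autofocus can
    seek the lens position that maximizes this score."""
    img = window.astype(np.float64)
    # 4-neighbour discrete Laplacian, evaluated on the interior pixels
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())
```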
- the size of the focus window can be set to one-fifth of the captured image corresponding to the full field of view.
- the size of the focus window can also be determined according to the preview ratio of the field of view.
- this method still has some problems. For example, due to the fixed size of the focus window, if the shooting target is smaller than the focus window, especially if the shooting target is in the foreground and the background texture information is rich, it is very easy to focus on the background when using this method, so that the focus is not accurate enough and the captured image cannot clearly image the shooting target.
- embodiments of the present application provide a focusing method, device, and electronic device, so that the electronic device can determine a region of interest (ROI) corresponding to the shooting target according to the actual size and position of the shooting target in the preview image, and then determine the size and position of the focus window according to the ROI and perform focusing.
- using the focusing method provided by the embodiments of the present application can improve the accuracy of focusing on the shooting target. In particular, it can effectively improve the focus accuracy for shooting targets of different sizes in a complex environment, realize clear shooting of the shooting target, and improve the image quality of the captured image and the filming rate.
- the electronic device in the embodiments of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, and a cellular phone.
- FIG. 2 is a schematic structural diagram of an electronic device 100 provided by an embodiment of this application.
- the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
- the sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
- the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100.
- the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
- the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
- the processor 110 may include one or more processing units.
- the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like.
- the different processing units may be independent devices or integrated in one or more processors.
- the controller may be the nerve center and command center of the electronic device 100.
- the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
- a memory may also be provided in the processor 110 to store instructions and data.
- the memory in the processor 110 is a cache memory.
- the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
- the processor 110 may include one or more interfaces.
- the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
- the processor 110 may be configured to perform super pixel division on multiple pixels included in the preview image according to the preview image.
- the processor 110 may also be used to merge multiple divided regions according to preset rules, for example, merge multiple divided regions according to a merging rule (Rule_merging) to obtain the ROI closest to the shooting target, and Determine the corresponding focus window according to the ROI.
- the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
- the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, so that it is converted into an image visible to the naked eye.
- ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
- the ISP may be provided in the camera 193.
- the camera 193 is used to capture still images or videos.
- the object generates an optical image through the lens and is projected to the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
- ISP outputs digital image signals to DSP for processing.
- DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
- the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
- the camera 193 may also realize focusing during shooting under the control of the processor 110.
- the camera 193 may include a lens 193A capable of auto-focusing.
- the lens 193A can, under the control of the processor 110 and according to the position of the focus window in the preview image, automatically adjust the mutual position of the lenses in the lens 193A, so that the focus is adjusted to a position where the sharpness of the shooting target included in the focus window is good, thereby achieving focus on the shooting target.
- the above description is based on an example in which the lens 193A is included in the camera 193.
- the lens 193A in the electronic device 100 may not be included in the camera 193, or the electronic device 100 may not include the camera 193.
- Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
- NPU is a neural-network (NN) computing processor.
- through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, for example, image recognition, face recognition, voice recognition, and text understanding.
- the charging management module 140 is used to receive charging input from the charger.
- the charger can be a wireless charger or a wired charger.
- the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
- the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device 100 through the power management module 141.
- the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
- the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
- the power management module 141 may also be provided in the processor 110.
- the power management module 141 and the charging management module 140 may also be provided in the same device.
- the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
- the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
- Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
- the antenna can be used in conjunction with a tuning switch.
- the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
- the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
- the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
- at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
- at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
- the modem processor may include a modulator and a demodulator.
- the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
- the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
- the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
- the modem processor may be an independent device.
- the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including a wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
- the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
- the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
- the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and the like.
- the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
- the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
- the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
- the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
- the display screen 194 is used to display images, videos, and the like.
- the display screen 194 includes a display panel.
- the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
- the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
- the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
- the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
- the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
- the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
- the internal memory 121 may include a storage program area and a storage data area.
- the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
- the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
- the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
- the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
- the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
- the audio module 170 can also be used to encode and decode audio signals.
- the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
- the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
- the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
- the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
- when the electronic device 100 answers a call or a voice message, the receiver 170B can be brought close to the human ear to receive the voice.
- the microphone 170C, also called a "mike" or a "mic", is used to convert sound signals into electrical signals.
- when making a sound, the user can put the mouth close to the microphone 170C and input the sound signal into the microphone 170C.
- the electronic device 100 may be provided with at least one microphone 170C.
- the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals.
- the electronic device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
- the earphone interface 170D is used to connect wired earphones.
- the earphone interface 170D can be a USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
- the touch sensor may be provided on the display screen 194; the touch sensor and the display screen 194 form a touchscreen, which is also called a "touch screen".
- the touch sensor is used to detect touch operations acting on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- the touch sensor may collect the position of the touch operation input by the user when using the electronic device 100 to take a picture, and transmit the position to the processor 110 so that the processor 110 can determine the target position corresponding to the photographing target.
- visual output related to touch operations may be provided through the display screen 194.
- the touch sensor may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
- the pressure sensor is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
- the pressure sensor may be provided on the display screen 194.
- the capacitive pressure sensor may include at least two parallel plates with conductive materials. When a force is applied to the pressure sensor, the capacitance between the electrodes changes.
- the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
- the electronic device 100 detects the intensity of the touch operation according to the pressure sensor.
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor.
- touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
- the gyroscope sensor can be used to determine the movement posture of the electronic device 100.
- in some embodiments, the angular velocity of the electronic device 100 around three axes (that is, the x, y, and z axes) can be determined by the gyroscope sensor.
- the gyroscope sensor can be used for shooting anti-shake.
- the gyroscope sensor detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
- the gyroscope sensor can also be used for navigation and somatosensory game scenes.
- the air pressure sensor is used to measure air pressure.
- the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor to assist positioning and navigation.
- the magnetic sensor includes a Hall sensor.
- the electronic device 100 may use a magnetic sensor to detect the opening and closing of the flip holster.
- the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor, and then set features such as automatic unlocking of the flip cover according to the detected opening and closing state.
- the acceleration sensor can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three-axis). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 100, applied to applications such as horizontal and vertical screen switching, and pedometer.
- the electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use a distance sensor to measure the distance to achieve fast focusing.
- the proximity light sensor may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
- the light emitting diode may be an infrared light emitting diode.
- the electronic device 100 emits infrared light to the outside through the light emitting diode.
- the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
- the electronic device 100 can use the proximity light sensor to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
- the proximity light sensor can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
- the ambient light sensor is used to sense the brightness of the ambient light.
- the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
- the ambient light sensor can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor can also cooperate with the proximity light sensor to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
- the fingerprint sensor is used to collect fingerprints.
- the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
- the temperature sensor is used to detect temperature.
- the electronic device 100 uses the temperature detected by the temperature sensor to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor exceeds a threshold value, the electronic device 100 executes to reduce the performance of the processor located near the temperature sensor, so as to reduce power consumption and implement thermal protection.
- when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 due to low temperature.
- when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
- Bone conduction sensors can acquire vibration signals.
- the bone conduction sensor can obtain the vibration signal of the vibrating bone mass of the human voice.
- the bone conduction sensor can also contact the human pulse and receive the blood pressure pulse signal.
- the bone conduction sensor may also be provided in the earphone, combined with the bone conduction earphone.
- the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor to realize the voice function.
- the application processor can parse the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor to realize the heart rate detection function.
- the button 190 includes a power-on button, a volume button, and so on.
- the button 190 may be a mechanical button. It can also be a touch button.
- the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
- the motor 191 can generate vibration prompts.
- the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
- touch operations applied to different applications can correspond to different vibration feedback effects.
- for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
- different application scenarios (for example, time reminding, receiving information, alarm clock, and games) can also correspond to different vibration feedback effects.
- the touch vibration feedback effect can also support customization.
- the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
- the SIM card interface 195 is used to connect to the SIM card.
- the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
- the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
- the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
- the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
- the SIM card interface 195 can also be compatible with different types of SIM cards.
- the SIM card interface 195 may also be compatible with external memory cards.
- the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
- the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
- the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
- FIG. 4 is a schematic flowchart of a focusing method provided by an embodiment of this application. As shown in Figure 4, the method may include S401-S407.
- the electronic device generates a first image, where the first image includes an image of a shooting target.
- when a user uses the electronic device to shoot, the user can open the related shooting software in the electronic device, such as a "camera" application. After the shooting software is opened, the electronic device can obtain a preview image of the full field of view through the lens in the camera.
- the following description takes the preview image as the first image as an example.
- the first image may be the first frame image acquired by the electronic device when the user starts shooting. It is understandable that the shooting target that the user wants to shoot will be included in the full field of view, and therefore, the first image will include an image corresponding to the shooting target.
- Generating the first image may include performing image processing on the image data collected by the camera by the processor to obtain the first image.
- the image processing may include at least one of the following: automatic white balance, color calibration, gamma calibration, magnification, sharpening, noise reduction, or image enhancement.
- the electronic device determines a target position, where the target position is included in the area of the first image in which the image of the shooting target is located. After acquiring the first image, the electronic device needs to determine the location of the shooting target. In the embodiments of the present application, the electronic device may determine the target position by using various methods, and use it as a reference to determine the position of the image of the shooting target in the first image.
- the electronic device may detect whether a user's touch operation is received, and determine the position of the image of the shooting target in the first image according to the detection result. For example, when receiving a user's touch operation, the electronic device determines the position in the first image corresponding to the touch operation as the target position. When the user's touch operation is not received, the electronic device determines the preset position of the first image as the target position.
- the preset position may be preset in the electronic device, or may be set by the user. The embodiment of the application does not limit this.
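A minimal sketch of this logic (all function and parameter names here are ours, not the patent's):

```python
def determine_target_position(image_shape, touch_pos=None, preset_pos=None):
    """Return the target position in the first image.

    touch_pos: (x, y) of a user's touch operation, if one was received.
    preset_pos: a position preset in the device or set by the user."""
    if touch_pos is not None:      # the user indicated the shooting target
        return touch_pos
    if preset_pos is not None:     # fall back to the configured preset
        return preset_pos
    h, w = image_shape[:2]
    return (w // 2, h // 2)        # default preset: center of the first image
```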
- the electronic device can display the first image on the shooting interface through its display screen, so that the user can touch, on the display screen, the position of the shooting target in the first image, thereby controlling the electronic device to locate the shooting target at that position. Therefore, when receiving the touch operation input by the user, the electronic device can determine that the touch position is included in the contour corresponding to the shooting target in the first image.
- the above example is described by taking the operation input by the user as a touch operation as an example.
- the user can also control the electronic device to locate the shooting target by inputting sliding, double-clicking, or other operations; this embodiment of the application does not limit this.
- the electronic device may also determine the corresponding position of the shooting target in the first image in other ways. For example, obtaining corresponding information through an external device connected to the electronic device to determine the target location, etc.
- the electronic device divides the first image to obtain M regions, where M is an integer greater than or equal to 3, and features of different regions in the M regions are different.
- the electronic device may divide the first image into multiple regions according to the feature difference between any two adjacent pixels in the first image. It is understandable that when the feature difference between the two pixels is large, it indicates that the objects displayed by the two pixels are quite different. Therefore, the two pixels can be divided into different regions.
- when the feature difference between two pixels is small, it indicates that the content displayed by the two pixels in the first image is similar. Therefore, the two pixels can be divided into the same area.
- the feature of the pixel may be the gray value of the pixel, the value of any one of the three RGB channels corresponding to the pixel, or the coordinates in a three-dimensional coordinate system composed of the three RGB channels corresponding to the pixel.
- the feature of the pixel may also be other pixel features that can distinguish different pixels. The embodiment of the application does not limit this. The following description will be given by taking the characteristic of the pixel as the gray value as an example.
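For concreteness, a pixel's gray value can be computed from its RGB channels; the weights below are the common ITU-R BT.601 luma coefficients, used here only as an illustration, since the patent does not prescribe a particular conversion:

```python
import numpy as np

def gray_value(rgb_image: np.ndarray) -> np.ndarray:
    """Per-pixel gray value of an H x W x 3 RGB image (BT.601 weights)."""
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```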
- the electronic device may use a super pixel segmentation algorithm to segment the first image (for example, it is called super pixel segmentation).
- M regions can be obtained, where M is an integer greater than or equal to 3, and different regions in the M regions have different characteristics.
- each of the above-mentioned M regions may correspond to one super pixel after the super pixel division.
- a super pixel can include one or more pixels.
- a minimum spanning tree method may be used to process the pixels in the first image. For example, divide pixels with close gray values into the same superpixel, and divide pixels with larger feature differences into different superpixels.
- the electronic device may map the first image into an undirected graph G(V, E).
- the undirected graph includes nodes corresponding to the pixels in the first image, where one pixel corresponds to one node. v_i can be used to identify the i-th node, and v_i ∈ V. An edge is formed between the nodes of adjacent pixels, that is, (v_i, v_j) ∈ E, and each edge identifies the neighbor relation between the i-th node and the j-th node.
- the electronic device may determine the weight w(v_i, v_j) of the edge between every two adjacent nodes in the undirected graph G(V, E).
- the weight w(v_i, v_j) can be determined by formula (1).
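Formula (1) itself is not reproduced in this text. In graph-based segmentation of this kind, the edge weight is typically the feature difference of the two adjacent pixels; with the gray value as the feature, a plausible form, stated here as an assumption rather than the patent's exact formula, is:

```latex
w(v_i, v_j) = \lvert I(v_i) - I(v_j) \rvert
```

where I(v_i) denotes the gray value of the pixel corresponding to node v_i.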
- the electronic device can divide the pixels corresponding to the nodes in the undirected graph G(V, E) into different regions according to the weights, to form a plurality of super pixels each including one or more pixels.
- the edges between nodes corresponding to pixels in the same super pixel have smaller weights, that is, the gray values of pixels in the same super pixel are relatively close; the edges between nodes corresponding to pixels in different super pixels have larger weights, that is, the gray values of pixels in different super pixels differ more.
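The sketch below illustrates this grouping with a simplified, fixed-threshold variant of the minimum-spanning-tree idea: edges are processed in increasing weight order (Kruskal style) and two pixels are joined when their gray difference is below `tau`. The absolute-difference weight and the single threshold are our assumptions; the patent's actual criterion is given by its formula (1) and merging rules.

```python
import numpy as np

class DSU:
    """Union-find over pixel indices, used to grow the superpixels."""
    def __init__(self, n: int):
        self.parent = list(range(n))
    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def superpixel_segment(gray: np.ndarray, tau: float = 8.0) -> np.ndarray:
    """Group 4-adjacent pixels whose gray difference is below tau.

    Returns an H x W label map; each distinct label is one superpixel."""
    h, w = gray.shape
    idx = lambda y, x: y * w + x
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal edge to the right neighbour
                edges.append((abs(float(gray[y, x]) - float(gray[y, x + 1])),
                              idx(y, x), idx(y, x + 1)))
            if y + 1 < h:  # vertical edge to the lower neighbour
                edges.append((abs(float(gray[y, x]) - float(gray[y + 1, x])),
                              idx(y, x), idx(y + 1, x)))
    dsu = DSU(h * w)
    for weight, a, b in sorted(edges):  # increasing weight, Kruskal style
        if weight < tau:
            dsu.union(a, b)
    return np.array([dsu.find(i) for i in range(h * w)]).reshape(h, w)
```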
- FIG. 5 is a schematic diagram of a super pixel segmentation provided by an embodiment of this application.
- the electronic device can respectively determine the weight values of the edges formed by the nodes corresponding to different pixels in the first image, that is, the differences in the gray values of different pixels, divide the pixels with a small gray value difference into one super pixel, and divide the pixels with a large gray value difference into different super pixels.
- the result of super-pixel division of the first image is shown in (c) of FIG. 5. Among them, each area corresponds to a super pixel.
- the electronic device can merge the eligible regions according to Rule_merging to eliminate the fragmentation of the region, so that the electronic device can more accurately determine the complete outline of the shooting target.
- Rule_merging is an exemplary description of the region merging process.
- the electronic device may process the first image, the result of the superpixel segmentation, and the target position through a region merging algorithm, so as to realize the merging of regions that meet the preset rules.
- the merging process may include the following S404-S407.
- the electronic device marks the M areas to obtain P background areas, Q unmarked areas, and target areas. After the electronic device acquires the M regions, it can label different regions according to the first image and the target position, so as to realize the classification of the M regions. In the embodiment of the present application, this process may also be referred to as the initialization of the first image.
- the electronic device can divide the first image into the P background regions that intersect the edge of the first image, the target region including the target position, and the other Q unmarked regions.
- the background area may be marked as R_B
- the target area may be marked as R_O
- the unmarked area may be marked as R_N
- the electronic device can determine the edge of the first image from the first image, and mark the areas intersecting the edge as R_B.
- the electronic device determines, according to the target position, the region where the target position is located, and marks that region as R_O.
- the other areas may be marked as R_N.
- the electronic device may determine the upper edge according to the first image, and mark the areas intersecting the upper edge among the divided M regions as R_B. As shown in FIG. 6, the electronic device can determine that R1, R2, R3, R4, R5, R6, R7, and R8 intersect with the upper edge.
- R1-R8 can therefore each be marked as R_B.
- similarly, the electronic device can determine the left edge, right edge, and lower edge, and mark the areas intersecting each edge as R_B. Further, the electronic device may mark the area where the target position is located as R_O. Exemplarily, as shown in FIG. 6, when the electronic device determines the target position (the black circle in FIG. 6) according to a user's touch operation, the electronic device may mark the area where the target position is located as R_O. The electronic device can mark the other areas as R_N (not shown in FIG. 6).
- the above description takes the case where the target position is included in one of the divided regions as an example.
- in other cases, the target location determined by the electronic device may be an area composed of multiple pixels (for example, referred to as an input target area).
- then the electronic device may mark every region that intersects the input target area as R_O.
- the following still takes the case where R_O corresponds to a target position included in one of the M regions in the first image as an example for description.
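- As a minimal sketch of this initialization (S404), the following assumes a label map produced by superpixel segmentation and a single-pixel target position; the marker constants and function name are illustrative. A region containing the target position is marked R_O even if it also touches an edge, which is one possible reading of the text.

```python
import numpy as np

# Illustrative markers for background, target (object) and unmarked regions.
R_B, R_O, R_N = 0, 1, 2

def initialize_labels(superpixels, target_yx):
    """Mark every superpixel region as R_B, R_O or R_N.

    Regions touching any of the four image edges are marked R_B, the
    region containing the (row, col) target position is marked R_O,
    and all remaining regions are marked R_N.
    """
    border_ids = set(np.concatenate([superpixels[0, :], superpixels[-1, :],
                                     superpixels[:, 0], superpixels[:, -1]]))
    target_id = superpixels[target_yx]
    marks = {}
    for region_id in np.unique(superpixels):
        if region_id == target_id:
            marks[region_id] = R_O      # target area
        elif region_id in border_ids:
            marks[region_id] = R_B      # background area
        else:
            marks[region_id] = R_N      # unmarked area
    return marks
```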
- the electronic device merges one or more of the Q unmarked areas into the background area according to the P background areas.
- each background area, i.e. each area marked R_B, has been determined.
- the electronic device may merge each adjacent unmarked area with a high degree of similarity to a background area, i.e. merge such R_N into R_B, in order to expand the scope of R_B.
- the electronic device may also merge the remaining R_N with each other, and repeat the above merging of R_N into R_B, so that the range of R_B is maximally extended.
- when the above merging process cannot be continued, it means that all the background in the first image has been extracted.
- here, Q_i may denote the i-th area adjacent to a region R in the merging condition.
- the aforementioned merging condition may also be referred to as Rule_merging.
- when starting to merge regions, the electronic device can first obtain the feature of each region through region feature extraction (in the embodiments of the present application, the feature of a region may be referred to as Hist).
- the similarity of the Hist of different regions is used as a measure of the similarity of two adjacent regions.
- the Bhattacharyya distance can be selected to characterize the similarity of two vectors.
- the Bhattacharyya distance D_B is as shown in formula (2).
- the similarity measurement of any two regions can be performed by the above formula (2) to obtain the similarity of the two regions.
- for example, for two regions R and Q whose corresponding features are Hist_R and Hist_Q respectively, the similarity measurement can be performed using the Bhattacharyya distance.
- the Bhattacharyya distance β(R, Q) is as shown in formula (3).
- in the following, the Bhattacharyya distance β is referred to as the similarity of the two regions.
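- Since formulas (2) and (3) are not reproduced in this excerpt, the following sketch assumes the common Bhattacharyya coefficient over normalized gray-level histograms as the similarity measure (values near 1 mean very similar regions); the bin count and function names are illustrative.

```python
import numpy as np

def region_hist(gray, mask, bins=64):
    """Normalized gray-level histogram (the Hist feature) of one region."""
    hist, _ = np.histogram(gray[mask], bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def bhattacharyya_similarity(hist_r, hist_q):
    """Assumed form of the similarity beta(R, Q): the Bhattacharyya
    coefficient sum_k sqrt(Hist_R[k] * Hist_Q[k])."""
    return float(np.sqrt(hist_r * hist_q).sum())
```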
- S405 may include S701-S703.
- the electronic device may perform a first merging process for each of the P background areas.
- the first merging process is: determining whether the first area exists, and when the first area exists, merging the first area into the second area.
- the first area is an unmarked area with the highest similarity to the second area among all areas adjacent to the second area, and the second area is any area of the P background areas.
- FIG. 8 shows a schematic diagram of a first merging process.
- the second area is R1 as an example for description.
- the area adjacent to R1 includes background areas R2 and R3, and unmarked areas R4, R5, R6, and R7.
- the electronic device can determine the first area by the following methods:
- first, calculate the similarities between R1 and R2, R1 and R3, R1 and R4, R1 and R5, R1 and R6, and R1 and R7 respectively.
- the similarity between R1 and R2 is marked as β(R1, R2);
- the similarity between R1 and R3 is marked as β(R1, R3);
- the similarity between R1 and R4 is marked as β(R1, R4);
- the similarity between R1 and R5 is marked as β(R1, R5);
- the similarity between R1 and R6 is marked as β(R1, R6);
- the similarity between R1 and R7 is marked as β(R1, R7).
- the electronic device then determines the largest of these similarities, denoted β1. When β1 is any one of β(R1, R4), β(R1, R5), β(R1, R6), or β(R1, R7), the electronic device determines that the first area exists, namely the unmarked area corresponding to β1.
- the electronic device may then merge the first area into R1.
- for example, when β1 is β(R1, R4), the electronic device can merge R4 into R1 to obtain an updated R1 (i.e., the updated R1 includes R1 and R4, as shown in (a) in FIG. 8).
- through one or more such merges, the proportion of the background area in the first image becomes larger, and correspondingly, the proportion of the unmarked area in the first image becomes smaller.
- the background area does not include pixels corresponding to the shooting target in the first image.
- in this way, the pixels in the first image other than those corresponding to the shooting target can be effectively classified into the background area, so as to accurately separate the shooting target from the background.
- S702. When the first area does not exist, perform a second merging process for each of the unmarked areas that are not merged into the background area.
- when the first area does not exist, that is, when the β1 determined by the electronic device is β(R1, R2) or β(R1, R3), it indicates that the similarity between R1 and its adjacent unmarked areas is less than the similarity between R1 and other background areas.
- in this case, the electronic device stops the above-mentioned first merging process, and instead executes the second merging process for each of the unmarked areas that are not merged into the background area.
- the second merging process is: determining whether a third area exists, and when the third area exists, merging the third area into the fourth area, and re-executing the first merging process.
- the third area is an unmarked area with the highest similarity to the fourth area among all areas adjacent to the fourth area, and the fourth area is any area of the unmarked areas that are not merged into the background area.
- the electronic device may perform the second merging process for each unmarked area in R5, R6, and R7, respectively.
- among R5, R6, and R7, take the fourth area being R6 as an example.
- the electronic device determines the similarity between R6 and other adjacent areas.
- the similarity between R6 and R5 is β(R6, R5);
- the similarity between R6 and R7 is β(R6, R7);
- the similarity between R6 and R1 is β(R6, R1).
- the electronic device determines the largest of these similarities, denoted β2. When β2 is β(R6, R5) or β(R6, R7), the electronic device can determine that the third area exists and is the corresponding unmarked area.
- when β2 is β(R6, R1), the electronic device can determine that the third area does not exist.
- one cycle of the first merging process and the second merging process is completed.
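- One possible reading of this alternating cycle (S701-S703) is sketched below, reusing the markers R_B/R_O/R_N and bhattacharyya_similarity from the sketches above. The bookkeeping structures (adjacency sets, per-region histograms and pixel counts) and the area-weighted histogram update are illustrative assumptions, not the patent's exact procedure.

```python
def region_merging(marks, adjacency, hists, areas):
    """Alternate the first and second merging processes until no merge
    is possible.  marks/adjacency/hists/areas are dicts keyed by region id."""
    def merge(dst, src):
        # Fold `src` into `dst`: area-weighted histogram, rewired adjacency.
        hists[dst] = (hists[dst] * areas[dst] +
                      hists[src] * areas[src]) / (areas[dst] + areas[src])
        areas[dst] += areas[src]
        for n in adjacency.pop(src):
            adjacency[n].discard(src)
            if n != dst:
                adjacency[n].add(dst)
                adjacency[dst].add(n)
        del marks[src], hists[src], areas[src]

    def most_similar_neighbour(rid):
        return max(adjacency[rid],
                   key=lambda n: bhattacharyya_similarity(hists[rid], hists[n]),
                   default=None)

    progress = True
    while progress:
        progress = False
        # First merging process: for every background area, merge in its
        # most similar neighbour when that neighbour is an unmarked area.
        for rid in [r for r, m in list(marks.items()) if m == R_B]:
            cand = most_similar_neighbour(rid)
            if cand is not None and marks[cand] == R_N:   # "first area" exists
                merge(rid, cand)
                progress = True
        if progress:
            continue                                      # repeat the first process
        # Second merging process: merge mutually similar unmarked areas.
        for rid in [r for r, m in list(marks.items()) if m == R_N]:
            if rid not in marks:
                continue                                  # already merged away
            cand = most_similar_neighbour(rid)
            if cand is not None and marks[cand] == R_N:   # "third area" exists
                merge(rid, cand)
                progress = True
                break                                     # back to the first process
    # Afterwards, the regions still marked R_N are merged into R_O (the ROI).
    return marks
```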
- the electronic device merges the unmarked area that is not merged into the background area with the target area to obtain a region of interest (ROI) of the image that includes the shooting target.
- the background area can be expanded to the greatest extent by merging the unmarked areas that comply with Rule_merging.
- when the background area cannot be expanded further, that is, when the remaining unmarked areas have low similarity with the background area, it can be considered that the images corresponding to these remaining unmarked areas in the first image represent part of the characteristics of the shooting target. Therefore, in the embodiment of the present application, the unmarked areas that have not been merged into the background area can be merged with the target area that includes the target position.
- the area obtained after the merging is the ROI corresponding to the shooting target in the first image.
- the first image is the image shown in (a) in FIG. 5 as an example.
- the segmentation result of the background area and the target area as shown in FIG. 9 can be obtained. It can be seen that the complete outline of the shooting target (such as the bird in Fig. 9) has been accurately segmented as the target area.
- the electronic device determines the focus area according to the ROI.
- the focus area may include at least part of the ROI.
- the focus area may include all pixels included in the ROI, or may include some pixels in the ROI.
- the electronic device may determine the focus area according to the target position and the area corresponding to the ROI.
- the focus area may be a rectangular focus window or a non-rectangular focus area. The following takes a rectangular focus area, i.e. a focus window, as an example for description.
- the electronic device may start from the target position and expand the four sides of the initial focus window in units of pixels.
- the initial focus window corresponds to the target position.
- the target position may coincide with the geometric center of the initial focus window.
- during the expansion, the number of pixels in which each of the four sides of the focus window coincides with the ROI is obtained.
- if the number of pixels in which the line where the first side is located coincides with the ROI is greater than the first threshold, the first side continues to expand. If that number is less than the first threshold, the first side stops expanding, where the first side is any one of the four sides. After all four sides stop expanding, the rectangle formed by the four sides is the focus window.
- the first threshold value is 2 as an example for description.
- the electronic device can use the target position as the starting point and scan outward row by row/column by column in units of pixels. For example, referring to (a) in FIG. 10, the electronic device can start from the target position and expand outward by one row/column of pixels at a time. After the first expansion, the scanning positions are where side a, side b, side c, and side d are located, as shown in (a) of FIG. 10. For any one of the sides (such as side a), the electronic device can determine the number of pixels overlapping between side a and the ROI every time side a is expanded.
- for example, when side a is expanded for the first time, side a is expanded to the row of pixels adjacent to and above the target position (that is, row 3#).
- at this time, the number of pixels overlapping between side a and the ROI is 5, and the electronic device can determine that the number of overlapping pixels is greater than the first threshold; therefore, side a can continue to be extended upward.
- when side a is expanded for the second time, side a is expanded to row 2#.
- at this time, the number of pixels overlapping between side a and the ROI is 1, and the electronic device can determine that the number of overlapping pixels is less than the first threshold; therefore, side a stops expanding upward.
- the electronic device can determine where side b, side c, and side d stop according to the above method. For example, as shown in (b) in Fig. 10, side b stops at row 7#, side c stops at column 3#, and side d stops at column 7#. In this way, the electronic device can determine the rectangle enclosed by side a, side b, side c, and side d as the focus window. In the embodiments of the present application, this method may also be referred to as an extended method. For example, by processing the first image as shown in FIG. 9 according to the above method, the focus window as shown in FIG. 11 can be obtained. It can be seen that the focus window can accurately locate the position of the shooting target (such as the bird in FIG. 11) in the first image, thereby ensuring that the shooting target can be shot with high quality after focusing according to the focus window.
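- A minimal sketch of this expansion method follows, assuming a boolean ROI mask and a single-pixel starting position; the tie-breaking choice (expansion continues while the overlap count is at least the threshold) and the function name are assumptions.

```python
import numpy as np

def expand_focus_window(roi_mask, target_yx, threshold=2):
    """Grow a rectangle from the target position one row/column at a time;
    each side stops once its overlap with the ROI falls below `threshold`."""
    h, w = roi_mask.shape
    top = bottom = target_yx[0]
    left = right = target_yx[1]
    grow = {'top': True, 'bottom': True, 'left': True, 'right': True}
    while any(grow.values()):
        if grow['top']:
            if top > 0 and roi_mask[top - 1, left:right + 1].sum() >= threshold:
                top -= 1
            else:
                grow['top'] = False
        if grow['bottom']:
            if bottom < h - 1 and roi_mask[bottom + 1, left:right + 1].sum() >= threshold:
                bottom += 1
            else:
                grow['bottom'] = False
        if grow['left']:
            if left > 0 and roi_mask[top:bottom + 1, left - 1].sum() >= threshold:
                left -= 1
            else:
                grow['left'] = False
        if grow['right']:
            if right < w - 1 and roi_mask[top:bottom + 1, right + 1].sum() >= threshold:
                right += 1
            else:
                grow['right'] = False
    return top, bottom, left, right   # the focus window
```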
- the electronic device may determine the minimum circumscribed rectangle of the ROI according to the ROI, and use the minimum circumscribed rectangle as the focus window.
- the focus window determined according to this method is shown in (c) of FIG. 10. In the embodiments of the present application, this method may also be referred to as the minimum bounding rectangle method.
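- The minimum bounding rectangle method reduces to a few lines; the sketch below assumes a non-empty boolean ROI mask.

```python
import numpy as np

def min_bounding_rect(roi_mask):
    """Smallest axis-aligned rectangle containing every ROI pixel."""
    ys, xs = np.where(roi_mask)
    return ys.min(), ys.max(), xs.min(), xs.max()  # top, bottom, left, right
```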
- the above takes a rectangular focus window as an example for description. In other embodiments, the focus window can also be any other shape.
- the determination method is similar to the above method, and will not be described in detail again.
- the electronic device may superimpose and display the focus window on the first image through its display screen, so that the user can clearly know the focus area of the electronic device.
- the electronic device may no longer display the focus window on the display screen, or provide the user with the focus window information in other forms, which is not limited in the embodiment of the present application.
- generally, the focus window determined according to the minimum bounding rectangle method is larger than the focus window determined according to the expansion method. Therefore, the focus window determined according to the minimum bounding rectangle method can include more pixels of the shooting target, and focusing according to this focus window can ensure the overall focus accuracy of the shooting target.
- the focus window acquired according to the expansion method can effectively reduce the influence of the edge of the shooting target on focusing, and focusing according to the focus window can more accurately adjust the focus of the shooting target closer to the central area.
- those skilled in the art can flexibly select according to actual needs, which is not limited in the embodiment of the present application.
- the electronic device can perform automatic focus according to the focus window.
- the electronic device may adjust the focal length of the lens through an auto focus (AF) algorithm according to the relevant information of the focus window, such as the position information of the four sides of the focus window in the first image, so as to adjust the focus of the camera to the position of the shooting target.
- the method may include S1201-S1211.
- S1201. The electronic device generates a first image, where the first image includes an image of a shooting target.
- S1202. The electronic device determines a target location, where the target location is included in the area where the image of the shooting target is located in the first image.
- S1203. The electronic device divides the first image according to the superpixel segmentation algorithm to obtain M regions, where M is an integer greater than or equal to 3, and features of different regions in the M regions are different.
- S1204. The electronic device divides the first image into R_B, R_O and R_N according to the first image, the superpixel segmentation result, and the target position.
- S1205. The electronic device performs a first merging process according to Rule_merging.
- the first merging process is similar to the first merging process in S701 shown in FIG. 7. That is, the first merging process is: determining whether the first area exists, and when the first area exists, merging the first area into the second area.
- the first area is an unmarked area with the highest similarity to the second area among all areas adjacent to the second area, and the second area is any area of the P background areas.
- S1206. The electronic device determines whether any areas were merged in S1205. When any areas were merged, S1205 is executed again. When no areas were merged, the following S1207 is executed.
- S1207. The electronic device performs a second merging process according to Rule_merging.
- the second merging process is similar to the second merging process in S702 shown in FIG. 7. That is, when the first area does not exist, the electronic device executes the second merging process, which is: determining whether a third area exists, and when the third area exists, merging the third area into the fourth area and re-executing the first merging process.
- the third area is an unmarked area with the highest similarity to the fourth area among all areas adjacent to the fourth area, and the fourth area is any area of the unmarked areas that are not merged into the background area.
- S1208. The electronic device determines whether any areas were merged in S1207. When any areas were merged, the above S1207 is executed again. When no areas were merged, the following S1209 is executed. It should be noted that when the electronic device executes the above-mentioned first merging process and second merging process, each completed merge updates the regions involved (for example, R_N and R_B), and their Hist changes accordingly. Therefore, after each execution of the first merging process or the second merging process, the electronic device may re-determine the Hist of the corresponding regions in the first image, so as to decide whether to perform the next merge.
- S1209. The electronic device judges whether there is still an area for which the above-mentioned first merging process or second merging process can be performed. When there is such an area, the above S1205 is repeatedly executed; when there is no such area, the following S1210 is executed.
- S1210. The electronic device merges the R_N that have not been merged into R_B into R_O to obtain the ROI corresponding to the shooting target.
- S1211. The electronic device determines a focus window according to the ROI corresponding to the shooting target.
- FIG. 13 shows the first image generated by the electronic device.
- by performing S1201-S1204 in FIG. 12, the segmented first image shown in (b) of FIG. 13 can be obtained.
- for example, according to the focusing method shown in FIG. 12, the electronic device can obtain the first image shown in (a) of FIG. 14. Through division, multiple regions as shown in (b) of FIG. 14 can be obtained. Through region merging, the ROI corresponding to the shooting target (that is, the flying bird in the figure, shown as a white outline) and the corresponding focus window (shown as a black rectangle) can be obtained, as shown in (c) of FIG. 14. For another example, as shown in FIG. 15, according to the focusing method shown in FIG. 12, the electronic device can obtain the first image shown in (a) of FIG. 15. Through division, multiple regions as shown in (b) of FIG. 15 can be obtained. Through region merging, the ROI corresponding to the shooting target and the corresponding focus window can be obtained.
- for another example, the electronic device can obtain the first image shown in (a) of FIG. 16. Through division, multiple regions as shown in (b) of FIG. 16 can be obtained. Through region merging, the ROI corresponding to the shooting target (that is, the rider in the figure, shown as a white outline) and the corresponding focus window (shown as a black rectangle) can be obtained, as shown in (c) of FIG. 16.
- for another example, the electronic device can obtain the first image shown in (a) of FIG. 17. Through division, multiple regions as shown in (b) of FIG. 17 can be obtained. Through region merging, the ROI corresponding to the shooting target (that is, the phone booth in the figure, shown as a white outline) and the corresponding focus window (shown as a black rectangle) can be obtained, as shown in (c) of FIG. 17. For another example, as shown in FIG. 18, according to the focusing method shown in FIG. 12, the electronic device can obtain the first image shown in (a) of FIG. 18. Through division, multiple regions as shown in (b) of FIG. 18 can be obtained.
- the first image involved in the above focusing method may be the first frame of image acquired by the electronic device when the electronic device starts shooting under the control of the user.
- the electronic device can determine the ROI corresponding to the shooting target according to the first frame of image, and focus accordingly.
- however, the relative position of the electronic device or the shooting target may change during the shooting process, so that the frame images acquired by the electronic device after the first frame image differ from the first frame image, and the corresponding focus window also needs to be adjusted accordingly.
- therefore, after determining the ROI corresponding to the first image and focusing accordingly, the electronic device may also use a tracking algorithm to track the parameters of the ROI in each subsequent frame of image and output the tracking result.
- the electronic device can determine whether the ROI needs to be adjusted according to the tracking result, so that the focal length can be adjusted adaptively.
- this method may be referred to as an image tracking method.
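- The patent does not fix a particular tracking algorithm; purely as an illustration, the sketch below locates the previous frame's ROI patch in the current frame by normalized cross-correlation (OpenCV template matching). The function name and box convention are assumptions.

```python
import cv2

def track_roi(prev_frame, curr_frame, roi_box):
    """Find the previous ROI patch in the current frame.

    roi_box is (top, bottom, left, right) in the previous frame; the
    returned box uses the same convention in the current frame.
    """
    t, b, l, r = roi_box
    template = prev_frame[t:b + 1, l:r + 1]
    result = cv2.matchTemplate(curr_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)   # best-match top-left corner
    h, w = template.shape[:2]
    return y, y + h - 1, x, x + w - 1
```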
- the method may include S1901-S1907.
- S1901. The electronic device generates a first image and determines a target location.
- S1902. The electronic device obtains an initial focus ROI through a superpixel segmentation algorithm and a region merging algorithm according to the first image and the target position.
- the specific execution process of the foregoing S1901-S1902 is similar to the focusing method shown in FIG. 4 or FIG. 12, and will not be repeated here.
- S1903. The electronic device configures the focus window 1 according to the auto focus algorithm.
- S1904. The electronic device tracks the ROI in real time through a tracking algorithm to obtain a tracking result.
- the tracking result may include ROI and/or change information of the focus window 1.
- the order of execution of S1903 and S1904 is not fixed. For example, S1903 and S1904 may be executed simultaneously, or S1903 may be executed before S1904, or S1903 may be executed after S1904.
- the execution sequence of S1903 and S1904 can be set according to actual conditions. The embodiment of the application does not limit this.
- S1905. The electronic device determines, through the auto focus algorithm, whether the focus window needs to be adjusted according to the tracking result and the focus window 1.
- for example, the electronic device may determine whether the focus window needs to be adjusted for focus adjustment according to the difference between the ROI and/or focus window 1 corresponding to the first image and the ROI and/or focus window (such as focus window 2) in the tracking result.
- the electronic device may calculate the difference in the coordinate position of the geometric center point of the ROI corresponding to different frame images through the AF algorithm (for example, the difference is referred to as Distance). Compare the relationship between Distance and the first threshold to determine whether the focus window needs to be adjusted. For example, when the Distance is greater than the first threshold, it is determined that the focus window needs to be adjusted. When the Distance is less than the first threshold, it is determined that there is no need to adjust the focus window.
- the first threshold may be a distance of 20 pixels or other distances close to the distance of 20 pixels. It should be noted that the first threshold may be preset or flexibly adjusted, which is not limited in the embodiment of the present application.
- for another example, the electronic device may calculate the area of the ROI corresponding to different frame images through the AF algorithm. For example, the area of the ROI corresponding to the first frame image is last_area, and the area of the ROI corresponding to the i-th frame image acquired after the first frame is current_area. The relationship between these two areas can be compared against the second threshold to determine whether the focus window needs to be adjusted.
- the second threshold may be preset or flexibly adjusted, which is not limited in the embodiment of the present application.
- the electronic device can determine the relationship between the ratio of the difference between the areas of the two ROIs (for example, the difference is denoted m) to last_area and the third threshold, and determine whether the focus window needs to be adjusted. For example, when m/last_area is greater than the third threshold, it is determined that the focus window needs to be adjusted. When m/last_area is less than the third threshold, it is determined that the focus window does not need to be adjusted.
- the third threshold may be 15% or a value close to 15%. It should be noted that the third threshold may be preset or flexibly adjusted, which is not limited in the embodiment of the present application.
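- Combining the two criteria above, a refocus decision could look like the following sketch; the ~20-pixel and ~15% defaults mirror the first and third thresholds described here, and the ROI summary format is an assumption.

```python
def focus_window_needs_update(prev_roi, curr_roi,
                              dist_thresh_px=20.0, area_thresh=0.15):
    """Decide whether the focus window should be reconfigured (S1905).

    Each roi is (center_y, center_x, area).  Returns True when either the
    ROI centre moved more than dist_thresh_px pixels (Distance criterion)
    or the relative area change m/last_area exceeds area_thresh.
    """
    (py, px, last_area), (cy, cx, current_area) = prev_roi, curr_roi
    distance = ((cy - py) ** 2 + (cx - px) ** 2) ** 0.5
    if distance > dist_thresh_px:
        return True
    m = abs(current_area - last_area)
    return m / last_area > area_thresh
```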
- when the focus window needs to be adjusted, the following S1906 is executed.
- when the focus window does not need to be adjusted, the following S1907 is executed.
- S1906. The electronic device dynamically configures the focus window through the focusing algorithm, and performs focusing according to the adjusted focus window.
- S1907. The electronic device performs focusing according to the initial focus window 1.
- the electronic device can perform focusing through the AF algorithm. If the electronic device uses the AF algorithm to focus according to the focus window 1 and the focus window is updated (that is, the focus window is dynamically configured), the electronic device can focus according to the updated focus window through the AF algorithm.
- the image tracking method in the above example is similar to the method in the earlier patent application "A focusing device, method and related equipment" (application number: PCT/CN2018/103370, filing date: 2018/08/30), which can be used as a reference in the specific implementation process.
- based on the focusing method provided by the embodiments of the present application, the electronic device can accurately determine all the areas included in the background, and then use the remaining area as the target area, which can accurately and effectively determine the complete contour of the shooting target in the first image. This effectively improves the focus accuracy on the shooting target, enabling a clear shot of the shooting target, thereby improving the image quality of the captured image and the filming rate. Therefore, focusing based on the ROI and the focus window determined by the above method can accurately focus on the shooting target. Especially when the first image contains multiple depths of field or hollow targets, interference can be effectively eliminated, and the focus can be adjusted to the position of the shooting target.
- in addition, since the electronic device can determine the ROI and the focus window based on the complete outline of the shooting target in the first image, it can adaptively adjust the size of the focus window to avoid a mismatch between the size of the focus window and the image size of the shooting target in the first image.
- the electronic device can determine the ROI and the focus window according to the complete contour of the shooting target in the first image, there is no problem that the image of the shooting target in the first image drifts out of the focus window.
- the electronic device can also track the ROI, so that when the shooting target and the electronic device move relative to each other, the electronic device can flexibly adjust the focus window by itself, thereby achieving accurate focus.
- the above-mentioned electronic equipment may include a focusing device to implement the above-mentioned focusing method.
- the focusing device includes corresponding hardware structures and/or software modules for performing various functions.
- the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software-driven hardware depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
- the embodiment of the present application may divide the function modules of the focusing device according to the foregoing method examples.
- each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
- the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
- the division of modules in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
- FIG. 20 shows a schematic diagram of the composition of a focusing device 2000.
- the focusing device 2000 can be a chip in an electronic device or a system on a chip.
- the focusing device 2000 shown in FIG. 20 includes: a generating unit 2001, a determining unit 2002, an acquiring unit 2003, and a combining unit 2004.
- the generating unit 2001 is configured to generate a first image, and the first image includes an image of a shooting target.
- the generating unit 2001 may be used to execute S401 as shown in FIG. 4.
- the generating unit 2001 may also be used to perform S1201 as shown in FIG. 12.
- the determining unit 2002 is configured to determine the target position in the first image.
- the determining unit 2002 may be used to perform S402 as shown in FIG. 4.
- the determining unit 2002 may also be used to perform S1202 as shown in FIG. 12.
- the acquiring unit 2003 is configured to divide the first image into P background regions, target regions, and Q unmarked regions according to the target position.
- the target area includes the target position
- the Q unmarked areas are the Q areas in the first image excluding the P background areas and the target area
- P and Q are both positive integers.
- the acquiring unit 2003 may be used to execute S403-S404 as shown in FIG. 4.
- the acquiring unit 2003 may also be used to execute S1203-S1204 as shown in FIG. 12.
- the merging unit 2004 is configured to merge one or more of the Q unmarked areas into at least one background area of the P background areas according to the P background areas.
- the merging unit 2004 is further configured to merge at least one unmarked area of the Q unmarked areas excluding the one or more areas into the target area to obtain a region of interest (ROI) including the shooting target.
- the merging unit 2004 may be used to perform S405 as shown in FIG. 4.
- the merging unit 2004 can also be used to execute S701-S703 as shown in FIG. 7.
- the merging unit 2004 can also be used to execute S1205-S1210 as shown in FIG. 12.
- the determining unit 2002 is further configured to determine a focus area according to the ROI, and the focus area includes at least part of the ROI.
- the determining unit 2002 and the merging unit 2004 may be used to perform S406 as shown in FIG. 4.
- the determining unit 2002 and the merging unit 2004 may also be used to perform S1211 as shown in FIG. 12.
- any one of the P background areas is in contact with the edge of the first image.
- the determining unit 2002 is configured to determine the preset position in the first image as the target position.
- the determining unit 2002 is configured to determine a location in the first image corresponding to the instruction as the target location according to an instruction of the user.
- the obtaining unit 2003 is configured to perform super pixel segmentation on the first image to obtain M regions, where M is an integer greater than or equal to 3.
- the obtaining unit 2003 is further configured to obtain the target area in the M areas according to the target position.
- the acquiring unit 2003 is further configured to acquire the P background regions in the M regions.
- the acquiring unit 2003 is further configured to determine the Q areas of the M areas except the P background areas and the target area as the Q unmarked areas.
- the merging unit 2004 is configured to perform at least one merging operation for at least one background area among the P background areas: determining whether there is a first area, where the first area is an unmarked area adjacent to each background area and, among the at least one first adjacent area adjacent to each background area, the first area has the highest feature similarity with each background area. When the first area exists, the first area is merged into each background area.
- the merging unit 2004 is also configured to, each time a merging operation is performed, determine, for each unmarked area of the one or more unmarked areas in the Q unmarked areas except the first area, whether there is a second area, where the second area is an unmarked area adjacent to each unmarked area and, among the at least one second adjacent area adjacent to each unmarked area, the second area has the highest feature similarity with each unmarked area. When the second area exists, the second area is merged into each unmarked area.
- the focus area is a rectangular focus window
- the rectangular focus window includes four sides, and on each of the four sides the number of pixels that coincide with the ROI is less than a preset threshold.
- one or more units involved in FIG. 20 above can be implemented by software, hardware, or a combination of the two, which is not limited in this embodiment.
- the software may be stored in a memory in the form of computer instructions, and the hardware may include a logic circuit, an analog circuit, or an algorithm circuit, etc., for example, the hardware may be located in a chip.
- the corresponding functions of the generating unit 2001 and/or the determining unit 2002 and/or the obtaining unit 2003 and/or the merging unit 2004 shown in FIG. 20 may be implemented by the processor 110 shown in FIG. 2.
- the focusing device provided in the embodiment of the present application is used to perform the function of the electronic device in the above focusing method, and therefore can achieve the same effect as the above focusing method.
- the focusing device provided by the embodiment of the present application may include a generating unit 2001 and/or a determining unit 2002 and/or an acquiring unit 2003 and/or a merging unit 2004 for supporting the foregoing The processing module or control module that completes the corresponding function.
- FIG. 21 shows a schematic diagram of the composition of an electronic device 2100.
- the electronic device 2100 may include: a processor 2101 and a memory 2102.
- the memory 2102 is used to store computer execution instructions.
- when the processor 2101 executes the instructions stored in the memory 2102, the electronic device 2100 is caused to execute one or more steps of S401-S407 shown in FIG. 4, or one or more steps of S701-S703 shown in FIG. 7, or one or more steps of S1201-S1211 shown in FIG. 12, or one or more steps of S1901-S1907 shown in FIG. 19, as well as other operations that the electronic device needs to perform.
- the electronic device 2100 may be the electronic device 100 as shown in FIG. 2.
- the functions of the processor 2101 may be implemented by the processor 110.
- Related functions of the memory 2102 can be implemented by a device with a storage function provided by the internal memory 121 and/or an external memory connected through the external memory interface 120.
- FIG. 22 shows a schematic diagram of the composition of a chip system 2200.
- the chip system 2200 may include: a processor 2201 and a communication interface 2202, which are used to support electronic devices to implement the functions involved in the foregoing embodiments.
- the chip system 2200 also includes a memory for storing necessary program instructions and data for the terminal.
- the chip system 2200 may be composed of chips, or may include chips and other discrete devices.
- the chip system 2200 may be included in the electronic device 100 as shown in FIG. 2.
- the function corresponding to the processor 2201 may be implemented by the processor 110 as shown in FIG. 2.
- the focusing device provided by the embodiment of the present application is used to perform the function of the terminal in the above focusing method, and therefore can achieve the same effect as the above focusing method.
- the functions or actions or operations or steps in the foregoing embodiments can be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
- a software program When implemented using a software program, it can be implemented in the form of a computer program product in whole or in part.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
Some embodiments of the present invention relate to the technical field of image processing. Disclosed are a focusing method and apparatus, and an electronic device, capable of improving the accuracy of focusing on a photographing target so as to obtain a clear shot of the photographing target, thereby improving the image quality of a captured image and the filming rate. The specific solution comprises the following steps: generating a first image, the first image comprising an image of a photographing target; determining a target position in the first image; dividing the first image into P background regions, a target region and Q unmarked regions according to the target position; merging, according to the P background regions, one or more regions among the Q unmarked regions into at least one background region among the P background regions; merging at least one unmarked region among the Q unmarked regions other than said one or more regions into the target region to obtain a region of interest (ROI) comprising the photographing target; and determining a focus region according to the ROI, the focus region comprising at least part of the ROI.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/078677 WO2021179186A1 (fr) | 2020-03-10 | 2020-03-10 | Procédé et appareil de mise au point, et dispositif électronique |
CN202080000988.5A CN113711123B (zh) | 2020-03-10 | 2020-03-10 | 一种对焦方法、装置及电子设备 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/078677 WO2021179186A1 (fr) | 2020-03-10 | 2020-03-10 | Procédé et appareil de mise au point, et dispositif électronique |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021179186A1 true WO2021179186A1 (fr) | 2021-09-16 |
Family
ID=77671099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/078677 WO2021179186A1 (fr) | 2020-03-10 | 2020-03-10 | Procédé et appareil de mise au point, et dispositif électronique |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113711123B (fr) |
WO (1) | WO2021179186A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116055712A (zh) * | 2022-08-16 | 2023-05-02 | 荣耀终端有限公司 | 成片率确定方法、装置、芯片、电子设备及介质 |
CN116074624A (zh) * | 2022-07-22 | 2023-05-05 | 荣耀终端有限公司 | 一种对焦方法和装置 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114302035B (zh) * | 2021-12-13 | 2024-06-28 | 杭州海康慧影科技有限公司 | 一种图像处理方法、装置、电子设备及内窥镜系统 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130081995A (ko) * | 2012-01-10 | 2013-07-18 | 엘지전자 주식회사 | 고속 자동 초점 기능을 구비한 카메라 장치 |
CN104270562A (zh) * | 2014-08-15 | 2015-01-07 | 广东欧珀移动通信有限公司 | 一种拍照对焦方法和拍照对焦装置 |
CN107850753A (zh) * | 2015-08-31 | 2018-03-27 | 索尼公司 | 检测设备、检测方法、检测程序和成像设备 |
CN109714526A (zh) * | 2018-11-22 | 2019-05-03 | 中国科学院计算技术研究所 | 智能摄像头及控制系统 |
JP2019103031A (ja) * | 2017-12-05 | 2019-06-24 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法およびプログラム |
US20190273857A1 (en) * | 2018-03-05 | 2019-09-05 | JVC Kenwood Corporation | Image pickup apparatus, image pickup method, and recording medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007077283A1 (fr) * | 2005-12-30 | 2007-07-12 | Nokia Corporation | Procede et dispositif de reglage de l'autofocalisation d'une camera video par suivi d'une region d'interet |
-
2020
- 2020-03-10 WO PCT/CN2020/078677 patent/WO2021179186A1/fr active Application Filing
- 2020-03-10 CN CN202080000988.5A patent/CN113711123B/zh active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130081995A (ko) * | 2012-01-10 | 2013-07-18 | 엘지전자 주식회사 | 고속 자동 초점 기능을 구비한 카메라 장치 |
CN104270562A (zh) * | 2014-08-15 | 2015-01-07 | 广东欧珀移动通信有限公司 | 一种拍照对焦方法和拍照对焦装置 |
CN107850753A (zh) * | 2015-08-31 | 2018-03-27 | 索尼公司 | 检测设备、检测方法、检测程序和成像设备 |
JP2019103031A (ja) * | 2017-12-05 | 2019-06-24 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法およびプログラム |
US20190273857A1 (en) * | 2018-03-05 | 2019-09-05 | JVC Kenwood Corporation | Image pickup apparatus, image pickup method, and recording medium |
CN109714526A (zh) * | 2018-11-22 | 2019-05-03 | 中国科学院计算技术研究所 | 智能摄像头及控制系统 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116074624A (zh) * | 2022-07-22 | 2023-05-05 | 荣耀终端有限公司 | 一种对焦方法和装置 |
CN116074624B (zh) * | 2022-07-22 | 2023-11-10 | 荣耀终端有限公司 | 一种对焦方法和装置 |
CN116055712A (zh) * | 2022-08-16 | 2023-05-02 | 荣耀终端有限公司 | 成片率确定方法、装置、芯片、电子设备及介质 |
CN116055712B (zh) * | 2022-08-16 | 2024-04-05 | 荣耀终端有限公司 | 成片率确定方法、装置、芯片、电子设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
CN113711123A (zh) | 2021-11-26 |
CN113711123B (zh) | 2022-07-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20924568 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20924568 Country of ref document: EP Kind code of ref document: A1 |