WO2023011302A1 - Shooting method and related apparatus (拍摄方法及相关装置)
- Publication number: WO2023011302A1 (PCT/CN2022/108502)
- Authority: WIPO (PCT)
- Prior art keywords: target, depth, subject, aperture, image
Classifications
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/675—Focus control based on electronic image sensor signals comprising setting of focusing regions
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
- H04N23/75—Circuitry for compensating brightness variation in the scene by influencing optical camera components
Definitions
- the present application relates to the field of electronic technology, in particular to a photographing method and related devices.
- the aperture is a component used to control the size of the lens opening, thereby controlling the depth of field and the imaging quality of the lens and, in cooperation with the shutter, the amount of incoming light.
- the present application provides a shooting method and a related device, which can adaptively adjust the aperture gear based on the image collected by the camera, and greatly improve the user's shooting experience.
- the present application provides a shooting method, which is applied to a terminal device.
- the terminal device includes a camera configured with an adjustable aperture.
- the method includes: in response to a first instruction, the terminal device starts the camera to collect images based on a default aperture gear; detects whether a first image collected by the camera includes a salient subject and a target person; when detecting that the first image includes a salient subject and a target person, determines a target focus object and a target aperture gear based on the depth of the salient subject and the depth of the target person; and controls the camera to focus on the target focus object and collect images based on the target aperture gear.
- the terminal device is configured with an adjustable aperture.
- the terminal device can adaptively switch the target focus object and adjust the aperture gear based on the depth of the target person and the depth of the salient subject in the image most recently captured by the camera, so that the camera can capture images with appropriate depth of field and brightness while improving focusing speed and focusing accuracy, which greatly improves the user's shooting experience.
- determining the target focus object and the target aperture gear specifically includes: when it is detected that the first image includes a salient subject and a target person, the salient subject and the target person are different objects, and the depth of the salient subject and the depth of the target person meet a first preset condition, determining the salient subject as the target focus object and determining the target aperture gear based on the depth of the salient subject; when it is detected that the first image includes a salient subject and a target person, the salient subject and the target person are different objects, and the depth of the salient subject and the depth of the target person do not meet the first preset condition, determining the target person as the target focus object and determining the target aperture gear.
- the terminal device can adaptively switch the target focus object based on the depth of the target person and the depth of the salient subject, and then adjust the target aperture gear based on the target focus object; when the salient subject is the target focus object, the target aperture gear can be adapted based on the depth of the salient subject.
- the camera can also capture images with appropriate depth of field and brightness, improving the focusing speed and accuracy, thereby greatly improving the user's shooting experience.
- the above-mentioned first preset condition includes: the depth of the salient subject is smaller than the depth of the target person, and the depth difference between the depth of the salient subject and the depth of the target person is greater than a difference threshold.
- the camera is controlled to focus on the salient subject when the salient subject is closer to the camera and the distance between the salient subject and the target person is greater than the difference threshold.
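The first preset condition reduces to a small decision rule. A minimal sketch follows, assuming hypothetical depth values in meters and a hypothetical difference threshold; the function and parameter names are illustrative, not from this application:

```python
def choose_focus_target(subject_depth_m, person_depth_m, diff_threshold_m=0.5):
    """Pick the target focus object per the first preset condition.

    The salient subject wins focus only when it is closer to the camera than
    the target person AND the depth gap exceeds the difference threshold.
    """
    if subject_depth_m < person_depth_m and \
            (person_depth_m - subject_depth_m) > diff_threshold_m:
        return "salient_subject"
    return "target_person"

# An item held 0.4 m from the lens while the anchor stands at 1.5 m:
print(choose_focus_target(0.4, 1.5))   # -> "salient_subject"
print(choose_focus_target(1.4, 1.5))   # -> "target_person" (gap too small)
```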
- the above-mentioned terminal device stores a first correspondence between depth and aperture gear; determining the target aperture gear based on the depth of the salient subject includes: determining, based on the first correspondence, the aperture gear corresponding to the depth of the salient subject as the target aperture gear, where the smaller the depth of the salient subject, the smaller the target aperture gear.
- when the first preset condition is satisfied, the terminal device adjusts the aperture gear based on the depth of the salient subject, and the smaller the depth of the salient subject, the smaller the aperture gear. In this way, when a salient subject approaches the camera, the terminal device can reduce the aperture in time to increase the depth of field, avoiding the blurring and slow focusing that occur when the salient subject moves out of the depth-of-field range.
- the above-mentioned first correspondence includes a correspondence between the N aperture gears of the adjustable aperture and M continuous depth intervals, where one or more of the M continuous depth intervals correspond to one of the N aperture gears, and N and M are positive integers greater than 1.
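As a sketch of one such first correspondence, the lookup below maps M = 4 contiguous depth intervals to aperture gears; the interval boundaries and f-numbers are invented for illustration and are not values from this application:

```python
import bisect

# Upper bounds (meters) of four contiguous depth intervals, and the aperture
# gear assigned to each: the shallower the depth, the smaller the aperture.
DEPTH_UPPER_BOUNDS = [0.3, 0.8, 2.0, float("inf")]
APERTURE_GEARS = ["f/4.0", "f/2.8", "f/2.0", "f/1.4"]

def target_aperture_gear(subject_depth_m: float) -> str:
    """Return the aperture gear of the interval containing the subject depth."""
    idx = bisect.bisect_left(DEPTH_UPPER_BOUNDS, subject_depth_m)
    return APERTURE_GEARS[idx]

print(target_aperture_gear(0.2))  # very close subject -> "f/4.0" (deeper field)
print(target_aperture_gear(1.5))  # mid-range subject  -> "f/2.0"
```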
- before collecting images based on the target aperture gear, the method further includes: determining a target exposure time and a target sensitivity based on the target aperture gear, such that the degree of change from a first value to a second value is less than a first preset range, where the first value is determined based on the current aperture gear, the current exposure time and the current sensitivity, and the second value is determined based on the target aperture gear, the target exposure time and the target sensitivity; collecting images based on the target aperture gear includes: collecting images based on the target aperture gear, the target exposure time and the target sensitivity.
- the exposure time and sensitivity are adaptively adjusted so that the degree of change between the first value and the second value remains within the first preset range before and after the aperture gear adjustment.
- the first preset range is ±15%. In this way, it can be ensured that the image brightness of the image captured by the camera changes smoothly before and after the aperture gear is switched.
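One plausible reading of the first and second values is an exposure product proportional to exposure time times sensitivity over the f-number squared. The sketch below uses that assumption (it is not the application's stated definition) and rescales the exposure time after a gear switch so the change stays within the ±15% range:

```python
def exposure_value(f_number: float, exposure_time_s: float, iso: float) -> float:
    # Scene-exposure proxy: more time/ISO or a wider aperture -> larger value.
    return exposure_time_s * iso / (f_number ** 2)

def compensate(cur_f, cur_t, cur_iso, tgt_f, max_change=0.15):
    """Scale the exposure time to offset an aperture change, keeping ISO fixed."""
    tgt_t = cur_t * (tgt_f ** 2) / (cur_f ** 2)    # exact compensation
    v1 = exposure_value(cur_f, cur_t, cur_iso)
    v2 = exposure_value(tgt_f, tgt_t, cur_iso)
    assert abs(v2 - v1) / v1 <= max_change         # within the ±15% preset range
    return tgt_t, cur_iso

# Stopping down from f/2.0 to f/2.8 roughly doubles the required exposure time.
print(compensate(2.0, 1 / 100, 400, 2.8))          # -> (0.0196, 400)
```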
- before detecting whether the first image captured by the camera includes a salient subject and a target person, the method further includes: detecting whether the current ambient light brightness is greater than a first brightness threshold; detecting whether the first image captured by the camera includes a salient subject and a target person includes: when it is detected that the ambient light brightness is greater than the first brightness threshold, detecting whether the first image captured by the camera includes a salient subject and a target person.
- when it is detected that the ambient light brightness is not greater than the first brightness threshold, the target aperture gear is the default aperture gear.
- in a high-brightness environment, the terminal device adjusts the aperture gear based on the detected target focus object; in a non-high-brightness environment, it can maintain the larger default aperture gear, or further increase the aperture gear, to ensure image brightness.
- determining the target focus object and the target aperture gear specifically includes: when it is detected that the first image includes a salient subject and a target person and the salient subject and the target person are the same item, determining the salient subject as the target focus object and determining the target aperture gear based on the depth of the salient subject; when it is detected that the first image includes a salient subject and a target person and the salient subject and the target person are the same person, determining the target person as the target focus object and determining the target aperture gear.
- the above shooting method further includes: when it is detected that the first image includes a salient subject but does not include a target person, determining the salient subject as the target focus object and determining the target aperture gear based on the depth of the salient subject; when it is detected that the first image includes a target person but does not include a salient subject, determining the target person as the target focus object and determining the target aperture gear.
- the aforementioned determination of the target aperture gear specifically includes: determining the target aperture gear as the default aperture gear.
- the above determining of the target aperture gear specifically includes: determining the target aperture gear based on the current ambient light brightness.
- the aforementioned determination of the target aperture gear specifically includes: determining the target aperture gear based on the depth of the target person.
- when the ambient light brightness is greater than the second brightness threshold, the target aperture gear is the first aperture gear; when the ambient light brightness is less than or equal to the third brightness threshold, the target aperture gear is the second aperture gear.
- the default aperture gear is smaller than the second aperture gear and larger than the first aperture gear.
- the present application provides a terminal device, including one or more processors and one or more memories.
- the one or more memories are coupled with one or more processors, the one or more memories are used to store computer program codes, the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the terminal device executes A shooting method in any possible implementation manner of any one of the above aspects.
- an embodiment of the present application provides a computer storage medium, including computer instructions; when the computer instructions run on the terminal device, the terminal device is made to execute the photographing method in any possible implementation of any one of the above aspects.
- an embodiment of the present application provides a computer program product, which, when running on a computer, causes the computer to execute the photographing method in any possible implementation manner of any one of the above aspects.
- FIG. 1 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
- FIG. 2A is a schematic diagram of a scene of a webcast provided by an embodiment of the present application.
- FIG. 2B is a schematic diagram of a live broadcast interface provided by an embodiment of the present application.
- FIG. 3 is a method flowchart of a shooting method provided in an embodiment of the present application.
- FIG. 4A is a schematic diagram of a salient subject detection framework provided by an embodiment of the present application.
- FIG. 4B is a schematic diagram of a salient subject detection frame provided by an embodiment of the present application.
- FIG. 5A is a schematic diagram of a salient subject detection framework provided by an embodiment of the present application.
- FIG. 5B is a schematic diagram of a binary Mask map provided by the embodiment of the present application.
- FIG. 5C is a schematic diagram of a salient subject segmentation frame provided by the embodiment of the present application.
- FIG. 5D is a schematic diagram of depth prediction provided by the embodiment of the present application.
- FIG. 6A is a schematic diagram of a target person detection frame provided by an embodiment of the present application.
- FIG. 6B is a schematic diagram of another binary Mask map provided by the embodiment of the present application.
- FIG. 6C is a schematic diagram of a preset scene provided by the embodiment of the present application.
- FIGS. 7A to 7C are schematic diagrams of the live broadcast interface provided by the embodiment of the present application.
- FIG. 8 is a method flow chart of another shooting method provided in the embodiment of the present application.
- FIGS. 9A to 9C are schematic diagrams of the user interface for starting the snapshot mode provided by the embodiment of the present application.
- FIG. 10 is a method flowchart of another shooting method provided in the embodiment of the present application.
- FIG. 11 is a software system architecture diagram provided by the embodiment of the present application.
- FIG. 12 is another software system architecture diagram provided by the embodiment of the present application.
- the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of these features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
- Aperture refers to the component on the camera used to control the size of the lens opening. It is used to control the depth of field and the imaging quality of the lens and, in cooperation with the shutter, the amount of incoming light.
- the f-number of the aperture is used to indicate the size of the aperture, and the f-number is equal to the focal length of the lens divided by the effective aperture diameter of the lens.
- the aperture gears include one or more of the following gears: f/1.0, f/1.4, f/2.0, f/2.8, f/4.0, f/5.6, f/8.0, f/11, f/16, f/22, f/32, f/44, f/64.
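A quick check of the definition; the focal length and diameter values are made-up examples:

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    # f-number = focal length / effective aperture diameter
    return focal_length_mm / aperture_diameter_mm

# A 28 mm lens with a 14 mm effective aperture diameter is an f/2.0 lens.
print(f_number(28, 14))      # -> 2.0
# Light admitted scales with aperture area, i.e. with 1 / f_number**2, so each
# standard gear step (f/1.4 -> f/2.0 -> f/2.8 ...) halves the incoming light.
print((2.0 / 1.4) ** 2)      # -> ~2.04, about one stop
```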
- Focus: the point at which parallel light converges on the photosensitive element (or film) after passing through the lens.
- Focal length refers to the distance from the optical center of the lens to the focal point where parallel light converges.
- Auto Focus refers to adjusting the image distance by moving the lens group in the camera lens back and forth, so that the image of the subject falls exactly on the photosensitive element and is sharp.
- Depth of Field: there is an allowable circle of confusion in front of and behind the focal point, and the distance between these two circles of confusion is called the depth of focus.
- the foreground depth of field includes the sharp range in front of the focus point, and the background depth of field includes the sharp range behind the focus point.
- Important factors affecting depth of field include aperture size, focal length, and shooting distance.
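The textbook thin-lens approximation makes these dependences concrete. The sketch below uses the standard hyperfocal-distance formulas with an assumed circle of confusion; these are general optics formulas, not formulas given in this application:

```python
def depth_of_field(f_mm, f_number, distance_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness (standard approximation)."""
    hyperfocal = f_mm ** 2 / (f_number * coc_mm) + f_mm
    near = distance_mm * (hyperfocal - f_mm) / (hyperfocal + distance_mm - 2 * f_mm)
    far = (distance_mm * (hyperfocal - f_mm) / (hyperfocal - distance_mm)
           if distance_mm < hyperfocal else float("inf"))
    return near, far

# A 28 mm lens focused at 1 m: stopping down from f/2.0 to f/4.0 roughly
# doubles the depth of field, which is why the method reduces the aperture
# for close salient subjects.
print(depth_of_field(28, 2.0, 1000))   # ~ (931 mm, 1080 mm)
print(depth_of_field(28, 4.0, 1000))   # ~ (871 mm, 1175 mm)
```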
- the structure of the terminal device 100 involved in the embodiment of the present application is introduced below.
- the terminal device 100 can be a terminal device equipped with iOS, Android, Microsoft or another operating system; for example, the terminal device 100 can be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device and/or a smart city device.
- FIG. 1 shows a schematic structural diagram of a terminal device 100 .
- the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
- the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
- the structure shown in the embodiment of the present invention does not constitute a specific limitation on the terminal device 100 .
- the terminal device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
- the illustrated components can be realized in hardware, software or a combination of software and hardware.
- the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
- the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in processor 110 is a cache memory.
- the memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access, reduces the waiting time of the processor 110 and thus improves system efficiency.
- processor 110 may include one or more interfaces.
- the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
- the charging management module 140 is configured to receive a charging input from a charger.
- the charger may be a wireless charger or a wired charger.
- the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
- the charging management module 140 may receive wireless charging input through the wireless charging coil of the terminal device 100 . While the charging management module 140 is charging the battery 142 , it can also supply power to the terminal device through the power management module 141 .
- the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
- the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
- the power management module 141 may also be disposed in the processor 110 .
- the power management module 141 and the charging management module 140 may also be set in the same device.
- the wireless communication function of the terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in the terminal device 100 can be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
- Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
- the antenna may be used in conjunction with a tuning switch.
- the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the terminal device 100 .
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
- the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
- the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
- at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
- at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
- a modem processor may include a modulator and a demodulator.
- the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
- the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
- the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
- the modem processor may be a stand-alone device.
- the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can provide wireless communication solutions applied to the terminal device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2 , demodulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
- the antenna 1 of the terminal device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
- the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
- the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
- the terminal device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
- the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
- the display screen 194 is used to display images, videos and the like.
- the display screen 194 includes a display panel.
- the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
- the terminal device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
- the terminal device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
- the ISP is used for processing the data fed back by the camera 193 .
- light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
- ISP can also perform algorithm optimization on image noise, brightness, and skin color.
- ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
- the ISP may be located in the camera 193 .
- Camera 193 is used to capture still images or video.
- the object generates an optical image through the lens and projects it to the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- DSP converts digital image signals into standard RGB, YUV and other image signals.
- the terminal device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
- the camera 193 is equipped with an adjustable aperture.
- when the terminal device 100 collects images through the camera, it can automatically adjust the shooting parameters according to a preset strategy, so that the camera 193 can obtain images with appropriate depth of field and brightness and the focusing speed is improved.
- the shooting parameters include aperture gear, and may also include parameters such as sensitivity (ISO), exposure time (or shutter speed), and the like.
- the aperture configured by the camera 193 has H adjustable aperture gears, and the corresponding apertures of the H aperture gears are in order from large to small, and H is a positive integer greater than 1.
- the lens aperture can be adjusted to any value between the maximum lens aperture value and the minimum lens aperture value based on the minimum adjustment accuracy.
- Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the terminal device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
- Video codecs are used to compress or decompress digital video.
- the terminal device 100 may support one or more video codecs.
- the terminal device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
- the NPU is a neural-network (NN) computing processor.
- the NPU can quickly process input information and continuously learn by itself.
- Applications such as intelligent cognition of the terminal device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
- the internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (non-volatile memory, NVM).
- Random access memory can include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), etc.; non-volatile memory can include disk storage devices and flash memory.
- flash memory can include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc.
- according to the storage cell potential order, it can include single-level cells (SLC), multi-level cells (MLC), triple-level cells (TLC), quad-level cells (QLC), etc.; according to the storage specification, it can include universal flash storage (UFS), embedded multimedia card (eMMC), etc.
- the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of an operating system or other running programs, and can also be used to store data of users and application programs.
- the non-volatile memory can also store executable programs and data of users and application programs, etc., and can be loaded into the random access memory in advance for the processor 110 to directly read and write.
- the external memory interface 120 may be used to connect an external non-volatile memory, so as to expand the storage capacity of the terminal device 100 .
- the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external non-volatile memory.
- the terminal device 100 may implement an audio function through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, and an application processor. Such as music playback, recording, etc.
- the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
- the audio module 170 may also be used to encode and decode audio signals.
- the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
- Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
- the terminal device 100 can listen to music through the speaker 170A, or listen to hands-free calls.
- Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
- the receiver 170B can be placed close to the human ear to receive the voice.
- the microphone 170C, also called a "mic" or "mike", is used to convert sound signals into electrical signals.
- the earphone interface 170D is used for connecting wired earphones.
- the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
- pressure sensor 180A may be disposed on display screen 194 .
- the gyroscope sensor 180B can be used to determine the motion posture of the terminal device 100 .
- in some embodiments, the angular velocities of the terminal device 100 around three axes (i.e., the x, y and z axes) can be determined through the gyroscope sensor 180B.
- the gyro sensor 180B can be used for image stabilization.
- the gyro sensor 180B detects the shaking angle of the terminal device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the terminal device 100 through reverse motion to achieve anti-shake.
- the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
- the air pressure sensor 180C is used to measure air pressure.
- the acceleration sensor 180E can detect the acceleration of the terminal device 100 in various directions (generally three axes). When the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of terminal equipment, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
- the distance sensor 180F is used to measure the distance.
- the terminal device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the terminal device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
- Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
- the light emitting diodes may be infrared light emitting diodes.
- the terminal device 100 emits infrared light through the light emitting diode.
- the terminal device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device 100 . When insufficient reflected light is detected, the terminal device 100 may determine that there is no object near the terminal device 100 .
- the terminal device 100 can use the proximity light sensor 180G to detect that the user holds the terminal device 100 close to the ear to make a call, so as to automatically turn off the screen to save power.
- Proximity light sensor 180G can also be used in smart cover (leather case) mode, and for automatic unlocking and screen locking in pocket mode.
- the ambient light sensor 180L is used for sensing ambient light brightness.
- the terminal device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the terminal device 100 is in the pocket to prevent accidental touch.
- the fingerprint sensor 180H is used to collect fingerprints.
- the temperature sensor 180J is used to detect temperature.
- the terminal device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature processing strategy.
- the touch sensor 180K is also called “touch device”.
- the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
- the touch sensor 180K is used to detect a touch operation on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- Visual output related to the touch operation can be provided through the display screen 194 .
- the touch sensor 180K may also be disposed on the surface of the terminal device 100 , which is different from the position of the display screen 194 .
- the bone conduction sensor 180M can acquire vibration signals.
- the keys 190 include a power key, a volume key and the like.
- the key 190 may be a mechanical key. It can also be a touch button.
- the motor 191 can generate a vibrating reminder.
- the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, messages, notifications and the like.
- the SIM card interface 195 is used for connecting a SIM card.
- the embodiment of the present application provides a shooting method, and the shooting method is applied to scenarios where images are continuously collected by a camera, such as webcast, video call, photo preview, video recording and other scenarios.
- the terminal device 100 is configured with an adjustable aperture.
- the terminal device 100 can adaptively adjust the aperture size and other shooting parameters (such as ISO, exposure time, etc.), so that the camera can capture images with appropriate depth of field and brightness, and improve focusing speed and focusing accuracy.
- the shooting method provided by the embodiment of the present application will be described in detail below by taking a network live broadcast scene as an example.
- Webcasting is a form of entertainment that arose with the rise of online audio-visual platforms, in which real-time footage is broadcast publicly over the Internet.
- the anchor can record and upload videos in real time through live broadcast devices such as mobile phones and tablets, and recommend food, daily necessities, etc. to the audience, who can interact with the anchor in real time through messages.
- if the aperture of the live broadcast device is not adjustable, the focus point of the camera of the live broadcast device cannot switch between the anchor and the introduced item in a timely and accurate manner as their positions change, so the camera of the live broadcast device cannot adaptively and accurately collect images with appropriate depth of field and brightness.
- the existing live broadcast equipment focuses on the human face by default, and the host needs to block the human face before the live broadcast equipment can switch the focus point to the object to capture a clear image of the object.
- when the anchor places the item in the core position of the screen (for example, front and center), the existing live broadcast equipment takes a long time to focus on the item.
- the anchor needs to manually focus to accurately focus on the item, and the user operation is cumbersome.
- implementing the photographing method provided by the embodiment of the present application in the webcast scene can avoid the above problems and effectively improve user experience.
- FIG. 2A shows a schematic diagram of a scene of live broadcasting via a terminal device 100 according to an embodiment of the present application.
- FIG. 2B shows a live broadcast interface 11 of the terminal device 100 .
- the live broadcast interface 11 includes a display area 201 , an input box 202 , a like control 203 , an avatar 204 , and a number of viewers 205 .
- the display area 201 is used to display images collected by the camera of the terminal device 100 in real time.
- the image displayed in the display area 201 includes the illustrated character 1, item 1 and item 2; compared with character 1 and item 1, item 2 is in the distant view.
- the input box 202 is used to receive the message input by the user; the avatar 204 is used to display the avatar of the anchor; the number of viewers 205 is used to display the number of real-time viewers of the live broadcast.
- the terminal device 100 may collect live video images through a front camera or a rear camera, which is not specifically limited here.
- the live broadcast interface 11 shown in FIG. 2B is an exemplary user interface provided by the embodiment of the present application, and should not limit the present application. In some other embodiments, the live broadcast interface 11 may include more or less interface elements than those shown in the illustration.
- FIG. 3 shows a method flowchart of the photographing method provided by the embodiment of the present application; the photographing method includes but is not limited to steps S101 to S106. The method flow is described in detail below.
- the terminal device 100 starts the camera, and sets the aperture gear of the camera to a default aperture gear.
- the terminal device 100 collects and displays the image 1 through a camera.
- the aperture configured by the camera includes the aforementioned H adjustable aperture gears, and the default aperture gear is the aperture gear with a larger aperture among the above H aperture gears.
- for example, the five aperture gears configured for the camera are, in descending order of aperture size, f/1.4, f/2, f/2.8, f/4 and f/6, and the default aperture gear is f/2.
- the terminal device 100 receives the first instruction, and in response to the first instruction, the terminal device 100 starts the camera to capture an image (for example, image 1), and sets the aperture of the camera to the default aperture.
- the first instruction is used to trigger the video shooting function of a specific application (such as an instant messaging application, a camera application or a live broadcast application); the first instruction may be an instruction generated based on an input operation performed by the user, and the above input operation may be a touch operation input by the user on the display screen (such as a click operation or a long-press operation), a non-contact operation such as a somatosensory operation or an air gesture, or a voice command input by the user, which is not specifically limited here.
- for example, the above-mentioned first instruction is used to start the live broadcast shooting function of the live broadcast application installed on the terminal device 100; in response to the above input operation, the terminal device 100 starts the camera, the camera collects images based on the default aperture gear with a larger aperture, and the captured image, such as image 1, is displayed in the display area 201 of the live broadcast interface 11.
- the depth of field corresponding to image 1 displayed in the display area 201 after the camera starts is relatively shallow, which causes item 2 in the distant view of image 1 to be out of focus and visually blurred.
- the camera involved in this embodiment of the present application may be a front camera or a rear camera, which is not specifically limited here.
- the first image involved in this application may be Image 1 .
- the terminal device 100 determines whether the current ambient light brightness is greater than the brightness threshold 1; if the current ambient light brightness is greater than the brightness threshold 1, execute S104.
- when the current ambient light brightness is less than or equal to brightness threshold 1, the terminal device 100 keeps the aperture gear at the default aperture gear.
- when the ambient light brightness is less than or equal to brightness threshold 1 and greater than brightness threshold 2, the terminal device 100 keeps the aperture gear at the default aperture gear; when the ambient light brightness is less than or equal to brightness threshold 2, the terminal device 100 increases the aperture gear to aperture gear 1.
- the brightness threshold 2 is smaller than the brightness threshold 1, and the lens aperture corresponding to the aperture gear 1 is larger than the lens aperture corresponding to the default aperture gear.
- the default aperture is f/2, and aperture 1 is f/1.4.
- when the ambient light brightness is greater than brightness threshold 1, the terminal device 100 executes step S104 and some of steps S105 to S111 to determine the target object to be focused on, and further determines how to adjust the aperture gear in combination with the depth of the target object to be focused on; when the ambient light brightness is less than brightness threshold 2, the terminal device 100 is in a nighttime environment and increases the amount of incoming light by increasing the aperture gear; when the ambient light brightness is less than or equal to brightness threshold 1 and greater than brightness threshold 2, the terminal device 100 is in a non-bright, non-nighttime environment and continues to keep the aperture gear at the default aperture gear.
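This three-way dispatch can be sketched as follows; the lux thresholds are invented for illustration, while the f/2 default gear and f/1.4 aperture gear 1 follow the example above:

```python
BRIGHTNESS_THRESHOLD_1 = 2000   # hypothetical lux value for "high brightness"
BRIGHTNESS_THRESHOLD_2 = 50     # hypothetical lux value for "nighttime"

def aperture_policy(ambient_lux: float) -> str:
    if ambient_lux > BRIGHTNESS_THRESHOLD_1:
        # High-brightness environment: run the detection steps (S104 onwards)
        # and adapt the gear to the depth of the target object to focus on.
        return "adaptive"
    if ambient_lux > BRIGHTNESS_THRESHOLD_2:
        return "f/2"     # keep the default gear: non-bright, non-night
    return "f/1.4"       # nighttime: open the aperture to admit more light

print(aperture_policy(5000), aperture_policy(300), aperture_policy(10))
```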
- the terminal device 100 may detect the current ambient light brightness through an ambient light sensor. In some embodiments, the terminal device 100 may acquire the correspondence between the image brightness and the ambient light brightness, and then may determine the current ambient light brightness through the image brightness of the image 1 .
- the embodiment of the present application does not specifically limit the acquisition of ambient light brightness.
- step S103 is optional. In some embodiments, after step S102, the terminal device 100 directly executes S104.
- the terminal device 100 detects the target person and the salient subject in the image 1, and acquires the depth of the target person and the depth of the salient subject.
- a salient subject in an image refers to the object in the image that the user's line of sight is most likely to focus on when the user sees the image, that is, the object in the image that the user is most interested in.
- specifically, step S104 may include step S104A and step S104B.
- the terminal device 100 detects the salient subject in the image 1 captured by the camera, and determines the area 1 where the salient subject is located in the image 1 .
- FIG. 4A shows a salient subject detection framework, which includes a preprocessing module and a salient subject detection module.
- the terminal device 100 inputs the RGB image collected by the camera (such as image 1) into the preprocessing module, which is used to downsample and crop the RGB image; the terminal device 100 then inputs the preprocessed RGB image output by the preprocessing module into the salient subject detection module, which uses neural network model 1 to identify the salient subject in the input RGB image and outputs a salient subject detection frame of a preset shape corresponding to the salient subject; the salient subject detection frame is used to indicate area 1 where the salient subject is located in image 1.
- the aforementioned preset shape may be a preset rectangle, ellipse, or circle, etc., which are not specifically limited here.
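- The flow of the framework in FIG. 4A can be sketched as follows; the `model.infer` interface, the crop and down-sampling choices, and the history handling are assumptions made for illustration, not the actual implementation of this application:

```python
import numpy as np
from collections import deque

def preprocess(rgb: np.ndarray, size=(256, 256)) -> np.ndarray:
    """Down-sample and center-crop the camera RGB frame (nearest-neighbour sketch)."""
    h, w, _ = rgb.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = rgb[top:top + side, left:left + side]
    ys = np.linspace(0, side - 1, size[0]).astype(int)
    xs = np.linspace(0, side - 1, size[1]).astype(int)
    return crop[np.ix_(ys, xs)]

def detect_salient_subject(rgb, model, history: deque):
    """Run neural network model 1 on the current frame together with the
    previous a frames and their results, returning a detection frame
    (x, y, w, h) or None when no salient subject is found."""
    x = preprocess(rgb)
    box = model.infer(x, history=list(history))  # hypothetical model interface
    history.append((x, box))                     # propagate past-frame information
    return box
```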
- the terminal device 100 detects the salient subject in image 1 through the salient subject detection framework shown in FIG. 4A, and outputs a rectangular salient subject detection frame corresponding to the salient subject (that is, item 1); the salient subject detection frame is used to indicate the area where item 1 is located in image 1.
- the terminal device 100 also takes the past frame information of the salient subject detection module (that is, the input images of the previous a frames and the corresponding output results, where a is a positive integer) as an input signal and feeds it into the neural network model 1 again, so as to realize salient subject detection on the current frame collected by the camera while propagating the detection frames of past frames, making the detection results of consecutive multi-frame images more stable.
- the terminal device 100 may use the trained neural network model 1 for salient subject detection to obtain the salient subject detection frame in the image 1 .
- the following is a brief introduction to the training process of the neural network model 1.
- the terminal device 100 can effectively and continuously track the salient subject in the video image collected by the camera.
- FIG. 5A shows another salient subject detection framework, which also includes a preprocessing module and a salient subject detection module.
- the salient subject detection module shown in Figure 5A uses the trained neural network model 2 to detect salient subjects.
- the input of the neural network model 2 is the preprocessed RGB image, and the output is the binary Mask image corresponding to the preprocessed RGB image.
- each pixel in the binary Mask map corresponds to a first value (such as 0) or a second value (such as 1), and the area where the pixel value is the second value is the area where the salient subject is located.
- FIG. 5B shows the binary Mask map corresponding to image 1; the area where the pixel value is the second value is the area where item 1 is located, and item 1 is the salient subject in image 1.
- the salient subject detection module determines the edge of the area where the salient subject is located based on the binary Mask image output by the neural network model 2, and uses the closed edge line of the salient subject as the salient subject segmentation frame; the salient subject segmentation frame is used to indicate the region 1 where the salient subject is located.
- the shape of the salient subject segmentation box is not fixed, usually irregular.
- FIG. 5C shows the salient subject segmentation frame of the salient subject (that is, item 1) in image 1.
- the output of the neural network model 2 is the salient subject segmentation frame of the salient subject of the input RGB image.
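- One possible way to derive the segmentation frame from the binary Mask map is to trace the closed contour of the second-value region, for example with OpenCV; the contour tracer and the assumption that the second value is 1 are illustrative choices, not part of this application:

```python
import numpy as np
import cv2  # assumed available; any contour-tracing routine would do

def mask_to_segmentation_frame(mask: np.ndarray):
    """Turn the binary Mask map output by neural network model 2 into a
    salient subject segmentation frame: the closed edge line around the
    region whose pixels carry the second value."""
    binary = (mask == 1).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no salient subject detected in the image
    edge = max(contours, key=cv2.contourArea)  # keep the largest closed edge line
    return edge.reshape(-1, 2)                 # pixel coordinates along the frame
```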
- the terminal device 100 takes the past frame information of the salient subject detection module (that is, the input images of the previous a frames and the corresponding output results) as an input signal and feeds it into the neural network model 2 again, to improve the stability of the detection results. Specifically, reference may be made to the relevant description of FIG. 4A, which will not be repeated here.
- the region where the salient subject is located can be separated from the other regions of image 1 along the edge of the salient subject.
- when the terminal device 100 indicates the region 1 through the salient subject detection frame, the terminal device 100 can represent the position of region 1 in image 1 through the coordinates and size of the salient subject detection frame.
- when the salient subject detection frame is rectangular, the coordinates of the salient subject detection frame are its upper-left corner coordinates (or lower-left, upper-right, or lower-right corner coordinates), and the size of the salient subject detection frame is its width and length; when the salient subject detection frame is circular, the coordinates of the salient subject detection frame are its center coordinates, and its size is its radius.
- when the terminal device 100 indicates the region 1 through the salient subject segmentation frame, the terminal device 100 may represent the position of region 1 in image 1 through the coordinates of each pixel on the salient subject segmentation frame.
- when the salient subject detection frameworks shown in FIG. 4A and FIG. 5A do not detect a salient subject in image 1, the salient subject detection framework outputs no result, or outputs a preset symbol used to indicate that no salient subject is detected in image 1.
- the preprocessing modules shown in FIG. 4A and FIG. 5A are optional, and the terminal device 100 can also directly use the salient subject detection module to detect salient subjects in the input image (for example, image 1 ) of the framework.
- the terminal device 100 determines the depth of the salient subject based on the region 1 of the image 1 .
- the terminal device 100 stores a corresponding relationship between phase difference (Phase Difference, PD) and depth (ie, object distance).
- the terminal device 100 acquires the PD value corresponding to the region 1 of the image 1, and then determines that the depth corresponding to the PD value is the depth of the salient subject.
- the pixel sensor of the camera of the terminal device 100 has a phase detection function and can detect the phase difference between the left pixel and the right pixel of each pixel in area 1; the PD value corresponding to area 1 can then be determined based on the phase differences of the pixels in area 1.
- the PD value corresponding to area 1 is equal to the average value of the phase difference of each pixel in area 1.
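- The PD-to-depth lookup can be sketched as follows; the calibration table values are invented placeholders, since a real device would store a per-module calibrated correspondence:

```python
import numpy as np

# Hypothetical stored correspondence between PD value and depth (object distance).
PD_TABLE    = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # phase difference samples
DEPTH_TABLE = np.array([ 5.0,  1.5, 0.6, 0.35, 0.25])  # corresponding depths, metres

def region_depth_from_pd(phase_diff_map: np.ndarray) -> float:
    """PD value of area 1 = mean per-pixel phase difference between the left
    and right pixels; depth is then looked up from the stored correspondence."""
    pd_value = float(phase_diff_map.mean())
    return float(np.interp(pd_value, PD_TABLE, DEPTH_TABLE))
```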
- the terminal device 100 acquires the depth image corresponding to image 1; based on the depth image, the terminal device 100 acquires the depths corresponding to the pixels in area 1 of image 1, and then determines the depth of the salient subject based on the depths corresponding to the pixels in area 1.
- the depth of the salient subject represents the distance between the salient subject and the lens in the actual environment.
- the depth corresponding to pixel 1 in area 1 where the salient subject is located indicates: the distance between the position corresponding to pixel 1 on the salient subject and the lens in the actual environment.
- the pixel value of each pixel in the depth image is used to represent the corresponding depth of the pixel. It can be understood that the depth corresponding to pixel 1 in image 1 is the pixel value of the pixel corresponding to pixel 1 in the depth image.
- the resolution of the depth image is equal to the resolution of the image 1, and the pixels of the depth image correspond to the pixels of the image 1 one by one.
- the resolution of the depth image is smaller than the resolution of the image 1, and one pixel of the depth image corresponds to multiple pixels of the image 1.
- in one implementation, the terminal device 100 determines the depth of the salient subject as the average or weighted average of the depths corresponding to all pixels in area 1. In another implementation, the terminal device 100 determines the depth of the salient subject as the average or weighted average of the depths corresponding to all pixels in a preset area of area 1, where the preset area is an area of preset size and preset shape within area 1. In yet another implementation, the terminal device 100 divides the depth range into N consecutive depth intervals and assigns each pixel in area 1 to the depth interval matching its depth; the terminal device 100 then determines the depth interval 1 into which the most pixels of area 1 fall, and takes the middle value of depth interval 1 as the depth of the salient subject.
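- The three strategies can be sketched as follows; the weighting scheme and the number of intervals are illustrative assumptions:

```python
import numpy as np

def subject_depth(depth_roi: np.ndarray, method: str = "mean",
                  n_intervals: int = 16) -> float:
    """depth_roi: per-pixel depths of area 1, taken from the depth image."""
    if method == "mean":
        return float(depth_roi.mean())
    if method == "weighted":
        # Assumed weighting: centre pixels count more than border pixels.
        h, w = depth_roi.shape
        yy, xx = np.mgrid[0:h, 0:w]
        weights = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
                         / (0.5 * h * w))
        return float((depth_roi * weights).sum() / weights.sum())
    # Interval mode: split the depth range into N consecutive intervals,
    # find the interval holding the most pixels, return its middle value.
    counts, edges = np.histogram(depth_roi, bins=n_intervals)
    i = int(counts.argmax())
    return float(0.5 * (edges[i] + edges[i + 1]))
```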
- the depth of the salient subject may also be referred to as the depth of area 1.
- when the terminal device 100 uses a camera to capture an image (for example, image 1), it uses a camera configured with a depth measurement device to capture the depth image corresponding to the above image.
- the depth measurement device may be a Time of Flight (TOF) device, such as ITOF or DTOF; the depth measurement device may also be other types of devices, which are not specifically limited here.
- when the terminal device 100 uses the camera to collect image 1, it uses the TOF device to continuously send light pulses to the subject area and receive the light pulses returned from the subjects, and determines the round-trip flight time of the light pulses to calculate the distance between each subject within the shooting range and the lens, thereby obtaining the depth image corresponding to image 1.
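- Since each pulse travels to the subject and back, the object distance follows directly from the round-trip flight time, as in this minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_seconds: float) -> float:
    """The light pulse covers the camera-to-subject distance twice,
    so the depth is half the round-trip flight distance."""
    return C * round_trip_seconds / 2.0
```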
- the camera for capturing the image 1 and the camera for capturing the depth image may be the same camera or different cameras, which are not specifically limited here.
- after the terminal device 100 collects image 1 with the camera, it inputs image 1 into the trained depth prediction neural network model 3, and the neural network model 3 outputs the depth image corresponding to image 1.
- the salient subject detection framework shown in FIG. 5A may further include a depth prediction module, which is used to use the neural network model 3 to obtain a depth image corresponding to the input image.
- the embodiment of the present application may also obtain the depth image corresponding to the image 1 in other manners, which are not specifically limited here.
- step S104C and step S104D may be included.
- the terminal device 100 detects the target person in the image 1 captured by the camera, and determines the area 2 where the target person is located in the image 1 .
- the terminal device 100 first preprocesses the image captured by the camera, and then detects the target person in the preprocessed image 1 .
- the terminal device 100 uses a face detection algorithm (such as a trained neural network model 4 for face detection) to identify the face of the target person in image 1 (for ease of description, the face of the target person is referred to as the target face), and obtains a target person detection frame of a preset shape (such as a rectangle, ellipse, or circle) corresponding to the target face; the target person detection frame is used to indicate the area 2 where the target person is located in image 1.
- when image 1 includes multiple human faces, the target face has a larger area, a smaller depth, and/or is closer to the center of image 1 among the multiple faces.
- when image 1 includes multiple faces, the terminal device 100 uses a face detection algorithm to identify the area where each face in image 1 is located, and then determines the target face among the multiple faces based on the area, depth, and/or position of each face.
- the terminal device 100 may determine the depth of each face. Specifically, reference may be made to the manner of determining the depth of the salient subject in the foregoing embodiments, which will not be repeated here.
- the face with the largest area among the multiple faces is determined as the target face.
- the face among the plurality of faces that is closest to the center of the image 1 is determined as the target face.
- the weights of the two factors of area and depth are set, and the face with the largest weighted value of the above two factors among the multiple faces is determined as the target face.
- for example, the weight of the face area A is a, the weight of the face depth B is b, and the face with the largest value of (a*A - b*B) is determined as the target face.
- for example, the image 1 captured by the camera includes person 1 and person 2; the terminal device 100 uses a face recognition algorithm to recognize the face of person 1 and the face of person 2 in image 1, and then, based on the area and depth of the two faces, determines the face of person 1, whose weighted value of face area and face depth is larger, as the target face.
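- The weighted selection can be sketched as follows; the weights and the sample values are illustrative only:

```python
def pick_target_face(faces, a: float = 1.0, b: float = 1.0):
    """faces: list of dicts with 'area' (pixels) and 'depth' (metres).
    Each face is scored as a*A - b*B, so larger, nearer faces win;
    a and b are tuning weights, not values defined by this application."""
    return max(faces, key=lambda f: a * f["area"] - b * f["depth"])

# Usage: person 1's face is bigger and nearer, so it becomes the target face.
faces = [{"name": "person 1", "area": 9000, "depth": 0.8},
         {"name": "person 2", "area": 4000, "depth": 2.0}]
print(pick_target_face(faces)["name"])  # -> person 1
```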
- the terminal device 100 uses a person recognition algorithm (for example, a trained neural network model 5 for person detection) to detect a target person in image 1, and obtains a binary Mask image corresponding to image 1.
- each pixel in the binary Mask map corresponds to a first value (such as 0) or a second value (such as 1), and the region where the pixel value is the second value is the region where the target person is located.
- FIG. 6B shows the binary Mask map corresponding to image 1; the area where the pixel value is the second value is the area where person 1 is located, and person 1 is the target person in image 1.
- the image 1 includes a plurality of persons, among the plurality of persons, the area of the target person is larger, the depth of the target person's face is smaller, and/or the target person is closer to the center of the image 1 .
- the terminal device 100 determines the edge of the area where the target person is located based on the binary Mask map corresponding to the target person, and uses the closed edge line of the target person as the target person segmentation frame; the target person segmentation frame is used to indicate the area 2 where the target person is located.
- the shape of the target person segmentation frame is not fixed, usually irregular.
- FIG. 5C shows a target person segmentation frame of the target person (namely person 1 ) in FIG. 1 .
- the output of the neural network model 5 is the target person segmentation frame of the target person in the image 1 .
- when image 1 includes multiple persons, the terminal device 100 identifies the multiple persons using a person recognition algorithm, and then determines a target person among the multiple persons based on the area, depth, and/or position of each person. Specifically, reference may be made to the above implementation manner of determining a target face among multiple faces, which will not be repeated here.
- the terminal device 100 uses the salient subject detection frame to indicate the area 1 where the salient subject is located, and uses the target person detection frame to indicate the area 2 where the target person is located.
- the target person detection frame and the salient subject detection frame are detection frames of preset shapes.
- the terminal device 100 uses a prominent subject segmentation frame to indicate the area 1 where the prominent subject is located, and uses a target person segmentation frame to indicate the area 2 where the target person is located.
- when the terminal device 100 displays the image captured by the camera, it can display the recognized target person detection frame and salient subject detection frame (or target person segmentation frame and salient subject segmentation frame).
- the terminal device 100 does not need to display the recognized target person detection frame and salient subject detection frame (or target person segmentation frame and salient subject segmentation frame); for example, the salient subject detection frame is only used to determine the area where the salient subject is located in image 1, so that the terminal device 100 can determine the depth of that area.
- the terminal device 100 determines the depth of the target person based on the area 2 of the image 1 .
- specifically, reference may be made to the implementation manner of step S104B, which will not be repeated here.
- the depth of the target person may also be referred to as the depth of area 2.
- when the terminal device 100 indicates region 2 through the target person detection frame, the terminal device 100 can represent the position of region 2 in image 1 through the coordinates and size of the target person detection frame; when the terminal device 100 indicates region 2 through the target person segmentation frame, the terminal device 100 may represent the position of region 2 in image 1 through the coordinates of each pixel on the target person segmentation frame.
- step S104A: detecting the salient subject
- step S104C: detecting the target person
- when the terminal device 100 detects a target person and a salient subject in image 1 and the target person and the salient subject are different objects, the terminal device 100 determines whether the depth of the target person and the depth of the salient subject satisfy a first preset condition.
- the first preset condition is that the depth of the salient subject is smaller than the depth of the target person, and the depth difference between the depth of the salient subject and the depth of the target person is greater than a difference threshold. In some embodiments, the first preset condition is that the depth of the salient subject is smaller than the depth of the target person, the depth difference between them is greater than a difference threshold, and the depth of the salient subject is smaller than a preset depth. In the embodiment of the present application, when the first preset condition is met, the terminal device 100 determines that the salient subject has entered a macro shooting scene.
- when the first preset condition is met, the terminal device 100 determines that the salient subject is the target focus object, and executes S106; when the first preset condition is not met, the terminal device 100 determines that the target person is the target focus object, and executes S107.
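- A sketch of the first preset condition check; the two threshold values are invented for the example:

```python
def first_preset_condition(person_depth: float, subject_depth: float,
                           diff_threshold: float = 0.3,
                           preset_depth: float = 0.6) -> bool:
    """True when the salient subject has entered a macro shooting scene:
    it sits clearly in front of the target person and close to the lens."""
    return (subject_depth < person_depth
            and (person_depth - subject_depth) > diff_threshold
            and subject_depth < preset_depth)

# S105: focus the salient subject (S106) if the condition holds,
# otherwise focus the target person (S107).
```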
- the terminal device 100 adjusts the gear of the aperture based on the depth of the salient subject, and focuses the camera on the salient subject.
- the aperture gears of the camera include the aforementioned H aperture gears, and the default aperture gear is the ith gear among the aforementioned H aperture gears.
- the terminal device 100 divides the depth into H-i consecutive depth intervals from large to small, and the last H-i aperture positions among the above-mentioned H aperture positions correspond to the above-mentioned H-i depth intervals one-to-one.
- the terminal device 100 adjusts the aperture gear based on the depth of the conspicuous subject, and the smaller the depth of the conspicuous subject, the smaller the aperture gear after adjustment.
- for example, the aperture gears of the camera include the five gears f/1.4, f/2, f/2.8, f/4, and f/6, and the default aperture gear is f/2. When the depth of the salient subject is greater than depth threshold 1 (for example, 60 cm), the aperture gear is kept as the default aperture gear; when the depth of the salient subject is less than or equal to depth threshold 1 and greater than depth threshold 2 (for example, 40 cm), the aperture gear is reduced to f/2.8; when the depth of the salient subject is less than or equal to depth threshold 2 and greater than depth threshold 3 (for example, 30 cm), the aperture gear is reduced to f/4; when the depth of the salient subject is less than or equal to depth threshold 3, the aperture gear is reduced to f/6. It can be understood that the more adjustable aperture gears there are, the finer the division of depth intervals can be.
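- Using the example thresholds and gears above, the depth-to-gear mapping can be written as:

```python
def gear_for_subject_depth(depth_m: float) -> str:
    """Depth thresholds 1/2/3 from the example: 0.60 m, 0.40 m, 0.30 m.
    The nearer the salient subject, the smaller the aperture gear."""
    if depth_m > 0.60:
        return "f/2"    # keep the default aperture gear
    if depth_m > 0.40:
        return "f/2.8"
    if depth_m > 0.30:
        return "f/4"
    return "f/6"
```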
- when the terminal device 100 adjusts the aperture gear, it adjusts the exposure time and ISO accordingly, so that the degree of change of the value of (exposure time * ISO / aperture gear f-number) before and after the aperture gear adjustment is kept within a first preset range, for example, ±15%. In this way, it can be ensured that the image brightness of the images captured by the camera changes smoothly before and after the aperture gear is switched.
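- A sketch of the compensation, assuming the simplest strategy of rescaling only the exposure time while keeping ISO fixed:

```python
def compensate_exposure(cur_t: float, cur_iso: float, cur_f: float,
                        new_f: float, tolerance: float = 0.15):
    """Pick a new exposure time and ISO so that the value of
    (exposure time * ISO / f-number) stays within the first preset
    range (here +/-15%) of its old value across the gear switch."""
    old_value = cur_t * cur_iso / cur_f
    new_t = old_value * new_f / cur_iso  # exact compensation in this sketch
    new_value = new_t * cur_iso / new_f
    assert abs(new_value - old_value) / old_value <= tolerance
    return new_t, cur_iso
```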
- the terminal device uses an AF algorithm to focus the camera on the salient subject.
- the terminal device 100 stores the corresponding relationship between the depth and the focus position of the focus motor; the terminal device 100 determines the target focus position of the focus motor according to the depth of the preset area of the area 1 where the salient subject is located, and then drives the focus motor to the target focus position, so that the camera focuses on the salient subject.
- the position of the aforementioned preset area is determined according to the position of area 1 .
- the aforementioned preset area may be an area of a preset size and a preset shape at the center of the area 1 .
- the aforementioned preset area is the entire area of area 1, and the depth of the preset area of area 1 is the depth of the salient subject.
- the terminal device 100 adjusts the aperture gear based on the depth of the salient subject. The smaller the depth of the salient subject, the smaller the aperture gear. In this way, when a prominent subject approaches the camera, the terminal device 100 can reduce the aperture in time to increase the depth of field, avoid blurring the prominent subject caused by the prominent subject moving out of the depth of field range, and further improve the focusing speed of the prominent subject.
- step S107 includes three possible implementation manners of S107A, S107B and S107C.
- the terminal device 100 adjusts the aperture gear to the default aperture gear, and focuses the camera on the target person.
- the terminal device 100 adjusts the aperture gear based on the ambient light brightness, and focuses the camera on the target person.
- in step S107B, when the ambient light brightness is greater than the preset threshold 1, the aperture gear is adjusted down based on the ambient light brightness, and the exposure time and ISO are adjusted adaptively, so that the degree of change of the value of (exposure time * ISO / aperture f-number) before and after the aperture gear adjustment is kept within the first preset range.
- when the aperture gear is adjusted down based on the ambient light brightness, the ISO may be kept unchanged and the exposure time appropriately increased.
- the aperture gear of the camera includes five gears of f/1.4, f/2, f/2.8, f/4 and f/6, the default aperture gear is f/2, and the ambient light brightness is greater than the preset When the threshold is 1, reduce the aperture stop to f/2.8.
- the terminal device 100 adjusts the aperture gear based on the depth of the target person, and focuses the camera on the target person.
- the depth of the target person has a linear relationship with the aperture gear, and the smaller the depth of the target person is, the smaller the aperture gear after adjustment is. Specifically, reference may be made to the corresponding relationship between the depth of the prominent subject and the aperture position in step S106, which will not be repeated here.
- when the terminal device 100 determines that it is currently in a preset scene, the terminal device 100 executes S107C; in the preset scene, the human face is usually relatively close to the camera, for example, the preset scene is a makeup scene.
- the terminal device 100 may determine whether it is currently in a preset scene by identifying images collected by the camera. Exemplarily, referring to FIG. 6C , when the terminal device 100 recognizes that the image 1 includes a human face and cosmetics, and the depth of the human face is less than a preset depth, it determines that the terminal device 100 is currently in a makeup scene.
- the shooting mode of the terminal device 100 includes a preset scene mode (for example, makeup mode), and when the terminal device 100 is shooting in the preset scene mode, it is determined that the terminal device 100 is currently in the preset scene.
- the terminal device 100 uses an AF algorithm to focus the camera on the target person.
- the terminal device 100 stores the corresponding relationship between the depth and the focus position of the focus motor; the terminal device 100 determines the target focus position of the focus motor according to the depth of the preset area in the area 2 where the target person is located, and then drives the focus motor to the target focus position, so that the camera focuses on the target person.
- the position of the preset area is determined according to the position of area 2 .
- the aforementioned preset area may be an area of a preset size and a preset shape at the center of the area 2.
- the aforementioned preset area is the entire area of area 2.
- person 1 is holding item 1 and gradually brings item 1 closer to the terminal device 100; the terminal device 100 detects the salient subject (that is, item 1) and the target person (that is, person 1) in the image captured by the camera.
- the terminal device 100 will focus the camera on the target person and maintain the default aperture gear, and the object 2 in the foreground will be blurred at this time.
- the terminal device 100 reduces the aperture gear based on the depth of the prominent subject.
- as the aperture gear decreases, the depth of field of the images shown in FIG. 7B and FIG. 7C increases, and the object 2 in the foreground gradually becomes clearer.
- S108 is further included after step S104.
- the terminal device 100 determines that the prominent subject is the target focus object, and executes S106.
- when the terminal device 100 detects a target person and a salient subject in image 1, the target person and the salient subject are the same item, and the depth of the salient subject is less than depth threshold 1, the terminal device 100 determines that the salient subject is the target focus object and executes S106.
- step S109 is further included after step S104.
- the terminal device 100 determines that the target person is the target focus object, and executes S107.
- S110 is further included after step S104.
- the terminal device 100 determines that the prominent subject is the target focus object, and executes S106.
- S111 is further included after step S104.
- the terminal device 100 determines that the target person is the target focus object, and executes S107.
- the terminal device 100 can adaptively adjust the target focus object and the aperture gear, so that the camera captures images with appropriate depth of field and brightness at all times. In addition, inaccurate and untimely focusing caused by the subject moving within a short range of the terminal device 100 is avoided, thereby effectively improving user experience.
- steps S112 to S113 are further included after step S102 .
- the terminal device 100 receives a focusing operation performed on the image 1 by the user.
- the terminal device 100 determines a focus frame of the image 1 .
- the terminal device 100 determines a focus frame with a preset shape (eg, square) and a preset size, and coordinate 1 is located at the center of the focus frame.
- the terminal device 100 determines whether the current ambient light brightness is greater than the brightness threshold 1; if the current ambient light brightness is greater than the brightness threshold 1, execute S115.
- when the current ambient light brightness is less than or equal to the brightness threshold 1, the terminal device 100 keeps the aperture gear as the default aperture gear.
- when the ambient light brightness is less than or equal to the brightness threshold 1 and greater than the brightness threshold 2, the terminal device 100 maintains the aperture gear as the default aperture gear; when the ambient light brightness is less than or equal to the brightness threshold 2, the terminal device 100 increases the aperture gear to aperture gear 1.
- the terminal device 100 adjusts the aperture gear based on the depth of the focus frame, and focuses the camera on the subject within the focus frame.
- the terminal device 100 may acquire the depths corresponding to the pixels in the focus frame through the depth image. In an implementation manner, the terminal device 100 determines the depth of the focus frame as the average or weighted average of the depths corresponding to all pixels in the focus frame. In another implementation, the terminal device 100 determines the depth of the focus frame as the average or weighted average of the depths corresponding to all pixels in a preset area of the focus frame, where the preset area is an area of preset size and preset shape within the focus frame.
- the depth is divided into N consecutive depth intervals; based on the depth corresponding to each pixel in the focus frame, each pixel is assigned to the corresponding depth interval; the terminal device 100 determines the depth interval 2 into which the most pixels of the focus frame fall, and determines that the depth of the focus frame is the middle value of depth interval 2.
- the present application also provides a shooting method, the method includes step S301 to step S304.
- the terminal device starts the camera to collect images based on a default aperture gear.
- the first image may be the image 1 in the foregoing embodiment.
- the camera focuses on the target focus object, and collects images based on the target aperture gear.
- determining the target focus object and the target aperture gear based on the depth of the salient subject and the depth of the target person specifically includes: when it is detected that the first image includes the salient subject and the target person, the salient subject and the target person are different objects, and the depth of the salient subject and the depth of the target person meet the first preset condition, determining the salient subject as the target focus object and determining the target aperture gear based on the depth of the salient subject; when it is detected that the first image includes the salient subject and the target person, the salient subject and the target person are different objects, and the depth of the salient subject and the depth of the target person do not meet the first preset condition, determining the target person as the target focus object and determining the target aperture gear.
- the above-mentioned first preset condition includes: the depth of the salient subject is smaller than the depth of the target person, and the depth difference between the depth of the salient subject and the depth of the target person is greater than a difference threshold.
- the above-mentioned terminal device stores a first corresponding relationship between depth and aperture gear, and determining the target aperture gear based on the depth of the salient subject includes: determining, based on the first corresponding relationship, that the aperture gear corresponding to the depth of the salient subject is the target aperture gear; the smaller the depth of the salient subject, the smaller the target aperture gear.
- the above-mentioned first corresponding relationship includes the correspondence between N aperture gears of the adjustable aperture and M continuous depth intervals, where one or more depth intervals among the M continuous depth intervals correspond to one aperture gear, and N and M are positive integers greater than 1.
- before acquiring the image based on the target aperture gear, the method further includes: determining the target exposure time and the target sensitivity based on the target aperture gear, such that the degree of change from the first value to the second value is within the first preset range, wherein the first value is determined based on the current aperture gear, the current exposure time, and the current sensitivity, and the second value is determined based on the target aperture gear, the target exposure time, and the target sensitivity; acquiring the image based on the target aperture gear includes: acquiring the image based on the target aperture gear, the target exposure time, and the target sensitivity.
- the first value is equal to (current exposure time*current ISO/f value of current aperture), and the second value is equal to (target exposure time*target ISO/f value of target aperture).
- determining the target focus object and the target aperture gear based on the depth of the salient subject and the depth of the target person specifically includes: when it is detected that the first image includes a salient subject and a target person and the salient subject and the target person are the same item, determining the salient subject as the target focus object and determining the target aperture gear based on the depth of the salient subject; when it is detected that the first image includes a salient subject and a target person and the salient subject and the target person are the same person, determining the target person as the target focus object and determining the target aperture gear.
- the above shooting method further includes: when it is detected that the first image includes a salient subject but does not include a target person, determining the salient subject as the target focus object and determining the target aperture gear based on the depth of the salient subject; when it is detected that the first image includes a target person but does not include a salient subject, determining the target person as the target focus object and determining the target aperture gear.
- the above determining the target aperture gear specifically includes: determining the target aperture gear as the default aperture gear.
- the above-mentioned determining the target aperture gear specifically includes: determining the target aperture gear based on the current ambient light brightness. In some embodiments, when the ambient light brightness is greater than the second brightness threshold, the target aperture gear is the first aperture gear; when the ambient light brightness is less than or equal to the third brightness threshold, the target aperture gear is the second aperture gear; the default aperture gear is smaller than the second aperture gear and larger than the first aperture gear. Specifically, reference may also be made to the description of the above related embodiments of S107B, which will not be repeated here.
- the above-mentioned determining the target aperture gear specifically includes: determining the target aperture gear based on the depth of the target person. Specifically, reference may be made to the description of the above-mentioned related embodiments of S107C, which will not be repeated here.
- before detecting whether the first image captured by the camera includes a salient subject and a target person, the method further includes: detecting whether the current ambient light brightness is greater than the first brightness threshold; detecting whether the first image captured by the camera includes a salient subject and a target person includes: when it is detected that the ambient light brightness is greater than the first brightness threshold, detecting whether the first image captured by the camera includes a salient subject and a target person.
- after detecting whether the current ambient light brightness is greater than the first brightness threshold, the method further includes: when it is detected that the ambient light brightness is less than the first brightness threshold, determining that the target aperture gear is the default aperture gear.
- the first brightness threshold may be the aforementioned brightness threshold 1 .
- the embodiment of the present application also provides a shooting method.
- the terminal device 100 can automatically adjust the aperture gear based on the ambient light brightness and the movement speed of the target object in the image captured by the camera, so that the camera can capture images with appropriate depth of field and brightness, and the focusing speed and focusing accuracy are improved.
- FIG. 9A to FIG. 9C show user interface diagrams for starting the snapshot mode.
- FIG. 9A shows the main interface 12 for displaying the application programs installed in the terminal device 100 .
- the main interface 12 may include: a status bar 301 , a calendar indicator 302 , a weather indicator 303 , a tray with commonly used application icons 304 , and other application icons 305 .
- the tray 304 with commonly used application program icons can display: phone icon, contact icon, text message icon, camera icon 304A.
- Other application icons 305 can display more application icons.
- the main interface 12 may also include a page indicator 306 . Icons of other application programs may be distributed on multiple pages, and the page indicator 306 may be used to indicate the application program on which page the user is currently viewing. Users can swipe the area of other application icons left and right to view application icons in other pages.
- FIG. 9A only exemplarily shows the main interface on the terminal device 100, and should not be construed as limiting the embodiment of the present application.
- the camera icon 304A may receive a user's input operation (such as a long-press operation), and in response to the above input operation, the terminal device 100 displays the service card 307 shown in FIG. 9B; the service card 307 includes one or more shortcut function controls of the camera application, for example, a portrait function control, a snapshot function control 307A, a video recording function control, and a self-timer function control.
- the capture function control 307A may receive a user's input operation (such as a touch operation), and in response to the above input operation, the terminal device 100 displays the capture interface 13 shown in FIG. 9C .
- the shooting interface 13 may include: a shooting control 401, an album control 402, a camera switch control 403, a shooting mode 404, a display area 405, a setting icon 406, and a fill light control 407. Wherein:
- the shooting control 401 can receive user input operations (such as touch operations).
- the terminal device 100 uses the camera to capture images in the snapshot mode, performs image processing on the images collected by the camera, and saves the processed images as snapshot images.
- the album control 402 is used to trigger the terminal device 100 to display the user interface of the album application.
- the camera switching control 403 is used to switch the camera used for shooting.
- the shooting mode 404 may include: a night scene mode, a professional mode, a photographing mode, a video recording mode, a portrait mode, a snapshot mode 404A, and the like. Any shooting mode in the above-mentioned shooting modes 404 may receive a user operation (such as a touch operation), and in response to the detected user operation, the terminal device 100 may display a shooting interface corresponding to the shooting mode.
- the current shooting mode is the snapshot mode
- the display area 405 is used to display a preview image captured by the camera of the terminal device 100 in the snapshot mode.
- the user can also start the capture mode of the camera application by clicking the capture mode 404A shown in FIG. 9C or a voice command, which is not specifically limited here.
- FIG. 10 shows a method flow chart of another shooting method provided by the embodiment of the present application, and the shooting method includes but not limited to steps S201 to S205.
- the shooting method is described in detail below.
- the terminal device 100 starts the snapshot mode, and sets the aperture gear of the camera as the default aperture gear.
- the terminal device 100 starts the snapshot mode and displays the shooting interface 13 shown in FIG. 9C .
- the preview image displayed in the display area 405 of the shooting interface 13 is actually captured by the camera based on the default aperture gear.
- the camera involved in this embodiment of the present application may be a front camera or a rear camera, which is not specifically limited here.
- the terminal device 100 detects a target object in an image collected by a camera.
- the terminal device 100 collects and displays image 2 through the camera; the terminal device 100 receives the user's input operation 1 on image 2, and in response to the above input operation 1, determines the target object selected by the user in image 2 based on the coordinate 2 on image 2 where input operation 1 acts; the area where the target object is located in image 2 includes the above coordinate 2.
- the terminal device 100 uses a preset detection algorithm to detect the target object in each frame of image captured by the camera.
- the target object is a prominent subject in the image captured by the camera.
- how the terminal device 100 detects the prominent subject in the image can refer to the relevant description of the aforementioned S104A, which will not be repeated here.
- the terminal device 100 may also determine the target object in other ways, which are not specifically limited here. It can be understood that in the embodiment of the present application, the terminal device 100 can perform real-time detection and continuous tracking of the target object in the image collected by the camera.
- the terminal device 100 determines the moving speed of the target object based on the image collected by the camera.
- the terminal device 100 determines the moving speed of the target object based on the latest two frames of images (ie, image 3 and image 4 ) including the target object captured by the camera.
- the terminal device 100 uses a preset optical flow algorithm to determine the optical flow intensity 1 of the target object between the image 3 and the image 4 , and then determines the moving speed of the target object based on the optical flow intensity 1 .
- the terminal device 100 stores the correspondence between the vector modulus of the optical flow intensity and the movement speed, and based on the correspondence, the terminal device 100 determines that the movement speed corresponding to the optical flow intensity 1 is the movement speed of the target object. In some embodiments, the vector modulus of the optical flow intensity 1 is equal to the moving speed of the target object.
- optical flow is the instantaneous velocity of a spatially moving object moving on an imaging plane (such as an image captured by a camera).
- the optical flow is also equivalent to the displacement of the target point.
- optical flow expresses the intensity of image changes, and it contains the motion information of objects between adjacent frames.
- the coordinates of feature point 1 of the target object in image 3 are (x1, y1)
- the coordinates of feature point 1 of the target object in image 4 are (x2, y2)
- the optical flow intensity of feature point 1 between image 3 and image 4 can be expressed as a two-dimensional vector (x2-x1, y2-y1). The greater the optical flow intensity of feature point 1, the larger its movement range and the faster its movement speed; the smaller the optical flow intensity of feature point 1, the smaller its movement range and the slower its movement speed.
- the terminal device 100 determines the optical flow intensities of K feature points of the target object between image 3 and image 4 , and then determines the optical flow intensities of the target object based on the optical flow intensities of the K feature points.
- the optical flow intensity of the target object is an average value of the two-dimensional vectors of the optical flow intensities of the above K feature points.
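- A sketch of this estimate, assuming a Lucas-Kanade tracker for the per-point flow and a placeholder scale factor standing in for the stored correspondence between flow modulus and movement speed:

```python
import numpy as np
import cv2  # assumed available; Lucas-Kanade is one possible flow estimator

def target_object_speed(gray3, gray4, pts3, speed_per_px: float = 0.01):
    """Estimate the movement speed of the target object between image 3 and
    image 4 from the optical flow of its K feature points; pts3 holds the
    feature point coordinates in image 3."""
    p3 = pts3.reshape(-1, 1, 2).astype(np.float32)
    p4, status, _err = cv2.calcOpticalFlowPyrLK(gray3, gray4, p3, None)
    ok = status.ravel() == 1
    flow = (p4[ok] - p3[ok]).reshape(-1, 2)  # per-point (x2-x1, y2-y1)
    mean_flow = flow.mean(axis=0)            # optical flow intensity of the object
    return float(np.linalg.norm(mean_flow)) * speed_per_px
```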
- the determination of the moving speed of the target object is not limited to the optical flow intensity, and the embodiment of the present application may also acquire the moving speed of the target object in other ways, which is not specifically limited here.
- the terminal device 100 determines whether the current ambient light brightness is greater than the brightness threshold 3; when the current ambient light brightness is greater than the brightness threshold 3, execute S205; when the current ambient light brightness is less than or equal to the brightness threshold 3, execute S206.
- based on the second corresponding relationship between movement speed and aperture gear, and the movement speed of the target object, the terminal device 100 adjusts the aperture gear.
- based on the third corresponding relationship between movement speed and aperture gear, and the movement speed of the target object, the terminal device 100 adjusts the aperture gear; comparing the second corresponding relationship with the third corresponding relationship, the movement speed corresponding to the same aperture gear is lower in the second corresponding relationship; in both the second corresponding relationship and the third corresponding relationship, when speed 1 is greater than speed 2, the aperture gear corresponding to speed 1 is less than or equal to the aperture gear corresponding to speed 2.
- the second corresponding relationship (or the third corresponding relationship) includes: a correspondence between at least one movement speed range and at least one aperture gear.
- the above at least one motion speed range corresponds to at least one aperture gear.
- one or more speed intervals in the second correspondence (or the third correspondence) correspond to one aperture gear.
- the aperture of the terminal device 100 includes 5 adjustable aperture positions (for example, f/1.4, f/2, f/2.8, f/4, f/6), and f/2 is the default aperture position.
- the terminal device 100 determines that it is currently in a high-brightness environment, and the terminal device 100 determines the aperture gear corresponding to the movement speed of the target object based on the second correspondence; wherein , the second corresponding relationship includes three speed ranges of low speed, medium speed and high speed.
- the low speed range is [0,1)m/s
- the medium speed range is [1,2.5)m/s
- the high speed range is [2.5 , ⁇ ) m/s
- the aperture gear corresponding to the low-speed range is f/2.8
- the aperture gear corresponding to the medium-speed range is f/4
- the aperture gear corresponding to the high-speed range is f/6.
- the terminal device 100 determines that it is currently in a non-high-brightness environment, and determines the aperture gear corresponding to the movement speed of the target object based on the third corresponding relationship; the third corresponding relationship includes two speed ranges, low speed and medium-high speed: for example, the low speed range is [0,1) m/s and the medium-high speed range is [1,∞) m/s; the aperture gear corresponding to the low speed range is f/1.4, and the aperture gear corresponding to the medium-high speed range is f/2.
- finer speed intervals can be divided, and a finer aperture gear switching strategy can be implemented according to the moving speed of the target object.
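- Using the example intervals above, the second correspondence (high-brightness environment) and the third correspondence (non-high-brightness environment) can be sketched as:

```python
def gear_for_speed(speed_mps: float, high_brightness: bool) -> str:
    """Map the target object's movement speed to an aperture gear, using
    the example speed ranges and gears given above."""
    if high_brightness:               # second correspondence
        if speed_mps < 1.0:
            return "f/2.8"            # low speed
        if speed_mps < 2.5:
            return "f/4"              # medium speed
        return "f/6"                  # high speed
    if speed_mps < 1.0:               # third correspondence
        return "f/1.4"                # low speed
    return "f/2"                      # medium-high speed
```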
- when the terminal device 100 adjusts the aperture gear, it automatically adjusts the exposure time and ISO accordingly, so that the degree of change of the value of (exposure time * ISO / aperture gear f-number) before and after the aperture gear adjustment is kept within a first preset range, for example, ±15%. In this way, it can be ensured that the image brightness of the images captured by the camera changes smoothly before and after the aperture gear is switched.
- the terminal device uses an AF algorithm to focus the camera on the target object. For a specific implementation manner, reference may be made to related embodiments of focusing the camera on a prominent subject in step S106 , which will not be repeated here.
- the image processing algorithms described above are used to remove motion blur from the image 5 .
- the image of each frame in raw format captured by the camera is preprocessed and converted into an image in yuv format; the optical flow information of the moving subject in image 5 is determined; the above optical flow information and image 5 are used as the input of the neural network model of the deblurring algorithm, and the output of that neural network model is the image with the motion blur in image 5 removed.
- motion blur, also known as dynamic blur, is the moving effect of the subject in the image collected by the camera; it appears more obviously in the case of long exposure or fast movement of the subject.
- the frame rate refers to the number of static images that the terminal device 100 can capture per second.
- the image processing algorithm can also be used to perform other image processing on the image 5, which is not specifically limited here. For example, adjust the saturation, color temperature and/or contrast of the image 5, perform portrait optimization on the portrait in the image 5, and the like.
- the terminal device 100 may lower the aperture gear to widen the depth of field, thereby improving the focusing accuracy and focusing speed of the fast-moving target object, and obtaining clear imaging of the target object.
- the software system architecture of the terminal device 100 involved in the embodiment of the present application is illustrated below by way of example.
- the software system of the terminal device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
- an Android system with a layered architecture is taken as an example to illustrate the software structure of the terminal device 100 .
- FIG. 11 shows a software system architecture diagram of the terminal device 100 provided in the embodiment of the present application.
- the terminal device 100 can adaptively adjust the aperture gear and other shooting parameters (such as ISO, exposure time, etc.), so that the camera can capture images with appropriate depth of field and brightness, and improve focusing speed and focusing accuracy.
- the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
- the Android system can be divided into an application program layer, an application program framework layer, a hardware abstraction layer (hardware abstraction layer, HAL) layer and a kernel layer (kernel) from top to bottom.
- the application layer includes a series of application packages, such as camera application, live broadcast application, instant messaging application and so on.
- the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
- the application framework layer includes some predefined functions. As shown in Figure 11, the application framework layer includes a target person detection module, a prominent subject detection module, an ambient light detection module, a depth determination module, an aperture gear switching module, an aperture motor drive module, an AF module, and a focus motor drive module.
- the application framework layer may also add a motion detector component (motion detector), which is used to perform logical judgment on the acquired input event and identify the type of the input event.
- the input event is determined to be a knuckle touch event or a finger pad touch event, etc., based on the touch coordinates included in the input event, the time stamp of the touch operation, and other information.
- the motion detection component can also record the trajectory of the input event, determine the gesture rule of the input event, and respond to different operations according to different gestures.
- the HAL layer and the kernel layer are used to perform corresponding operations in response to functions called by system services in the application framework layer.
- the kernel layer is the layer between hardware and software.
- the kernel layer can contain camera drivers, display drivers, ambient light sensor drivers, and more. Wherein, the camera driving may include focusing motor driving and aperture motor driving.
- the application program (such as a camera application or a live broadcast application) calls the interface of the application framework layer to start the shooting function, then calls the camera driver in the kernel layer to drive the camera to continuously collect images based on the default aperture gear, and calls the display driver to drive the display to display the above images.
- the salient subject detection module is used to detect the salient subject in the image captured by the camera, and determine the area 1 where the salient subject is located in the above image; where the salient subject is the projection subject of the user's line of sight in the above image.
- the target person detection module is used to detect the target person in the image collected by the camera, and determine the area 2 where the target person is located in the above image; the target person is the person in the above image with the largest area, the smallest depth, and/or the position closest to the center of the above image.
- the ambient light detection module is used to detect the current ambient light brightness.
- the depth determination module is used to determine the depth of the prominent subject based on the area 1 where the prominent subject is located, and determine the depth of the target person based on the area 2 where the target person is located.
- the aperture gear switching module is used to determine the currently required aperture gear and the target focus object based on the brightness of the ambient light, the depth of the prominent subject and the depth of the target person.
- the aperture motor drive module is used to determine the aperture motor code value corresponding to the currently required aperture gear, and determine the current (or voltage) value of the aperture motor corresponding to the currently required aperture gear based on the aperture motor code value.
- the AF module is used to determine the target focus position by using an AF algorithm based on the depth of the target focus object and the position of the area where the target focus object is located.
- the focus motor drive module is used to determine the focus motor code value corresponding to the target focus position, and determine the focus motor current (or voltage) value corresponding to the target focus position based on the focus motor code value.
- the aperture motor drive adjusts the aperture position based on the current (or voltage) value of the aperture motor issued by the aperture motor drive module; the focus motor drive adjusts the focus position based on the current (or voltage) value of the focus motor issued by the focus motor drive module , so that the camera focuses on the target focus object.
- the terminal device uses the camera to collect images based on the adjusted aperture gear and focus position, and calls the display driver to drive the display to display the above images.
- FIG. 12 shows another software system architecture diagram of the terminal device 100 provided in the embodiment of the present application.
- the terminal device 100 can automatically adjust the aperture gear based on the ambient light brightness and the movement speed of the target object in the image captured by the camera, so that the camera captures images with appropriate depth of field and brightness and so that focusing speed and focusing accuracy are improved.
- the application framework layer includes a target object detection module, an ambient light detection module, a movement speed determination module, an aperture gear switching module, an aperture motor drive module, an AF module, a focus motor drive module and an image processing module.
- the camera application, in response to a received instruction to start the capture mode, calls the interface of the application framework layer to start the shooting function in capture mode, then calls the camera driver in the kernel layer to drive the camera to continuously capture images at the default aperture gear, and calls the display driver to drive the display to show these images.
- the target object detection module is used to detect the target object in the image collected by the camera.
- the movement speed determination module is used to determine the movement speed of the target object based on the images collected by the camera.
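A minimal sketch of such a speed estimate from two consecutive detections; frame differencing of bounding-box centers is an assumed implementation detail:

```python
import math

def pixel_speed(prev_box, curr_box, dt):
    """Apparent speed of the target object in pixels per second, from
    the displacement of its bounding-box center between two frames
    captured dt seconds apart; boxes are (x, y, w, h). A real system
    would smooth this estimate over several frames.
    """
    px, py = prev_box[0] + prev_box[2] / 2, prev_box[1] + prev_box[3] / 2
    cx, cy = curr_box[0] + curr_box[2] / 2, curr_box[1] + curr_box[3] / 2
    return math.hypot(cx - px, cy - py) / dt
```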
- the ambient light detection module is used to detect the current ambient light brightness.
- the aperture gear switching module is used to determine the currently required aperture gear based on the ambient light brightness and the moving speed of the target object.
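A hedged sketch of such a rule follows; the lux and speed thresholds, the gear numbers, and the convention that a larger gear means a larger aperture opening are all assumptions:

```python
def capture_mode_gear(ambient_lux, speed_px_s,
                      bright_lux=500.0, fast_px_s=200.0, default_gear=3):
    """Pick an aperture gear for capture (snapshot) mode.

    Idea: a fast-moving subject needs a short exposure time to avoid
    motion blur, so open the aperture wider unless the scene is already
    very bright; in a bright, slow scene stop down for depth of field.
    """
    if speed_px_s > fast_px_s:
        return 5 if ambient_lux < bright_lux else 4
    if ambient_lux > bright_lux:
        return 2
    return default_gear
```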
- the aperture motor drive module is used to determine the aperture motor code value corresponding to the currently required aperture gear, and determine the current (or voltage) value of the aperture motor corresponding to the currently required aperture gear based on the aperture motor code value.
- the AF module is used to determine the target focus position by using an AF algorithm based on the depth of the target object and the position of the area where the target object is located.
- the focus motor drive module is used to determine the focus motor code value corresponding to the target focus position, and determine the focus motor current (or voltage) value corresponding to the target focus position based on the focus motor code value.
- the aperture motor driver adjusts the aperture opening based on the aperture motor current (or voltage) value issued by the aperture motor drive module; the focus motor driver adjusts the focus position based on the focus motor current (or voltage) value issued by the focus motor drive module, so that the camera focuses on the target object.
- the terminal device 100 uses the camera to capture images at the adjusted aperture gear and focus position.
- the image processing module is used to process the captured image with a preset image processing algorithm to remove motion blur; it can also adjust the saturation, color temperature and/or contrast of the image, and perform portrait optimization on any portraits in the image.
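The saturation and contrast adjustments could be sketched with OpenCV as below; the parameter values are placeholders, and the preset motion-blur-removal algorithm is not reproduced because this application does not specify it:

```python
import cv2
import numpy as np

def postprocess(img_bgr, saturation=1.1, contrast=1.05, brightness=0):
    """Boost saturation in HSV space, then apply a linear
    contrast/brightness adjustment. Placeholder parameters."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * saturation, 0, 255)
    img = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    return cv2.convertScaleAbs(img, alpha=contrast, beta=brightness)
```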
- the terminal device 100 invokes the display driver to drive the display screen to display the processed captured image.
- all or part of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof.
- when implemented using software, they may be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the processes or functions according to the present application are generated in whole or in part.
- the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
- the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (solid state disk, SSD)), etc.
- the processes may be implemented by a computer program instructing the related hardware.
- the program may be stored in a computer-readable storage medium.
- when the program is executed, the processes of the foregoing method embodiments may be performed.
- the aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disk, and various other media that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (17)
- 1. A photographing method, applied to a terminal device, the terminal device comprising a camera configured with an adjustable aperture, the method comprising: in response to a first instruction, the terminal device starting the camera to capture images at a default aperture gear; detecting whether a first image captured by the camera includes a salient subject and a target person; when it is detected that the first image includes the salient subject and the target person, determining a target focus object and a target aperture gear based on the depth of the salient subject and the depth of the target person; and the camera focusing on the target focus object and capturing images at the target aperture gear.
- 2. The method according to claim 1, wherein when it is detected that the first image includes the salient subject and the target person, determining the target focus object and the target aperture gear based on the depth of the salient subject and the depth of the target person specifically comprises: when it is detected that the first image includes the salient subject and the target person, the salient subject and the target person are different objects, and the depth of the salient subject and the depth of the target person satisfy a first preset condition, determining the salient subject to be the target focus object and determining the target aperture gear based on the depth of the salient subject; and when it is detected that the first image includes the salient subject and the target person, the salient subject and the target person are different objects, and the depth of the salient subject and the depth of the target person do not satisfy the first preset condition, determining the target person to be the target focus object and determining the target aperture gear.
- 3. The method according to claim 2, wherein the first preset condition comprises: the depth of the salient subject is smaller than the depth of the target person, and the difference between the depth of the salient subject and the depth of the target person is greater than a difference threshold.
- 4. The method according to claim 2, wherein the terminal device stores a first correspondence between depth and aperture gear, and determining the target aperture gear based on the depth of the salient subject comprises: determining, based on the first correspondence, the aperture gear corresponding to the depth of the salient subject to be the target aperture gear, wherein the smaller the depth of the salient subject, the smaller the target aperture gear.
- 5. The method according to claim 4, wherein the first correspondence comprises a correspondence between N aperture gears of the adjustable aperture and M consecutive depth intervals, one or more of the M consecutive depth intervals corresponding to one of the N aperture gears, N and M being positive integers greater than 1.
- 6. The method according to claim 1, wherein before capturing images at the target aperture gear, the method further comprises: determining a target exposure time and a target light sensitivity based on the target aperture gear, wherein the degree of change from a first value to a second value is smaller than a first preset range, the first value being determined based on the current aperture gear, the current exposure time and the current light sensitivity, and the second value being determined based on the target aperture gear, the target exposure time and the target light sensitivity; and capturing images at the target aperture gear comprises: capturing images based on the target aperture gear, the target exposure time and the target light sensitivity.
- 7. The method according to claim 1, wherein before detecting whether the first image captured by the camera includes a salient subject and a target person, the method further comprises: detecting whether the current ambient light brightness is greater than a first brightness threshold; and detecting whether the first image captured by the camera includes a salient subject and a target person comprises: when it is detected that the ambient light brightness is greater than the first brightness threshold, detecting whether the first image captured by the camera includes a salient subject and a target person.
- 8. The method according to claim 1, wherein when it is detected that the first image includes the salient subject and the target person, determining the target focus object and the target aperture gear based on the depth of the salient subject and the depth of the target person specifically comprises: when it is detected that the first image includes the salient subject and the target person, and the salient subject and the target person are the same item, determining the salient subject to be the target focus object and determining the target aperture gear based on the depth of the salient subject; and when it is detected that the first image includes the salient subject and the target person, and the salient subject and the target person are the same person, determining the target person to be the target focus object and determining the target aperture gear.
- 9. The method according to claim 1, further comprising: when it is detected that the first image includes the salient subject but not the target person, determining the salient subject to be the target focus object and determining the target aperture gear based on the depth of the salient subject; and when it is detected that the first image includes the target person but not the salient subject, determining the target person to be the target focus object and determining the target aperture gear.
- 10. The method according to any one of claims 2, 8 and 9, wherein determining the target aperture gear specifically comprises: determining the target aperture gear to be the default aperture gear.
- 11. The method according to any one of claims 2, 8 and 9, wherein determining the target aperture gear specifically comprises: determining the target aperture gear based on the current ambient light brightness.
- 12. The method according to any one of claims 2, 8 and 9, wherein determining the target aperture gear specifically comprises: determining the target aperture gear based on the depth of the target person.
- 13. The method according to claim 11, wherein when the ambient light brightness is greater than a second brightness threshold, the target aperture gear is a first aperture gear; when the ambient light brightness is less than or equal to a third brightness threshold, the target aperture gear is a second aperture gear; and the default aperture gear is smaller than the second aperture gear and greater than the first aperture gear.
- 14. The method according to claim 7, wherein after detecting whether the current ambient light brightness is greater than the first brightness threshold, the method further comprises: when it is detected that the ambient light brightness is less than the first brightness threshold, determining the target aperture gear to be the default aperture gear.
- 15. A terminal device, comprising a camera, a memory, one or more processors, a plurality of applications, and one or more programs, wherein the camera is configured with an adjustable aperture and the one or more programs are stored in the memory, and wherein, when the one or more processors execute the one or more programs, the terminal device is caused to implement the method according to any one of claims 1 to 14.
- 16. A computer storage medium, comprising computer instructions which, when run on a terminal device, cause the terminal device to execute the method according to any one of claims 1 to 14.
- 17. A computer program product which, when run on a computer, causes the computer to execute the method according to any one of claims 1 to 14.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22852016.9A EP4366289A1 (en) | 2021-07-31 | 2022-07-28 | Photographing method and related apparatus |
BR112024002006A BR112024002006A2 (pt) | 2021-07-31 | 2022-07-28 | Método de fotografia e aparelho relacionado |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110876921.4 | 2021-07-31 | ||
CN202110876921.4A CN115484383B (zh) | 2021-07-31 | 2021-07-31 | 拍摄方法及相关装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023011302A1 (zh) | 2023-02-09 |
Family
ID=84419621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/108502 WO2023011302A1 (zh) | 2021-07-31 | 2022-07-28 | 拍摄方法及相关装置 |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4366289A1 (zh) |
CN (2) | CN117544851A (zh) |
BR (1) | BR112024002006A2 (zh) |
WO (1) | WO2023011302A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116320716A (zh) * | 2023-05-25 | 2023-06-23 | 荣耀终端有限公司 | 图片采集方法、模型训练方法及相关装置 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140375798A1 (en) * | 2013-06-20 | 2014-12-25 | Casio Computer Co., Ltd. | Imaging apparatus and imaging method for imaging target subject and storage medium |
JP2015230414A (ja) * | 2014-06-05 | 2015-12-21 | キヤノン株式会社 | 撮像装置、制御方法およびプログラム |
JP2018042092A (ja) * | 2016-09-07 | 2018-03-15 | キヤノン株式会社 | 画像処理装置、撮像装置、制御方法およびプログラム |
CN110177207A (zh) * | 2019-05-29 | 2019-08-27 | 努比亚技术有限公司 | 逆光图像的拍摄方法、移动终端及计算机可读存储介质 |
JP2020067503A (ja) * | 2018-10-22 | 2020-04-30 | キヤノン株式会社 | 撮像装置、監視システム、撮像装置の制御方法およびプログラム |
CN111935413A (zh) * | 2019-05-13 | 2020-11-13 | 杭州海康威视数字技术股份有限公司 | 一种光圈控制方法及摄像机 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104506770A (zh) * | 2014-12-11 | 2015-04-08 | 小米科技有限责任公司 | 拍摄图像的方法及装置 |
CN109544618B (zh) * | 2018-10-30 | 2022-10-25 | 荣耀终端有限公司 | 一种获取深度信息的方法及电子设备 |
CN110493527B (zh) * | 2019-09-24 | 2022-11-15 | Oppo广东移动通信有限公司 | 主体对焦方法、装置、电子设备和存储介质 |
2021
- 2021-07-31 CN CN202311350536.1A patent/CN117544851A/zh active Pending
- 2021-07-31 CN CN202110876921.4A patent/CN115484383B/zh active Active
2022
- 2022-07-28 WO PCT/CN2022/108502 patent/WO2023011302A1/zh active Application Filing
- 2022-07-28 EP EP22852016.9A patent/EP4366289A1/en active Pending
- 2022-07-28 BR BR112024002006A patent/BR112024002006A2/pt unknown
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140375798A1 (en) * | 2013-06-20 | 2014-12-25 | Casio Computer Co., Ltd. | Imaging apparatus and imaging method for imaging target subject and storage medium |
JP2015230414A (ja) * | 2014-06-05 | 2015-12-21 | キヤノン株式会社 | 撮像装置、制御方法およびプログラム |
JP2018042092A (ja) * | 2016-09-07 | 2018-03-15 | キヤノン株式会社 | 画像処理装置、撮像装置、制御方法およびプログラム |
JP2020067503A (ja) * | 2018-10-22 | 2020-04-30 | キヤノン株式会社 | 撮像装置、監視システム、撮像装置の制御方法およびプログラム |
CN111935413A (zh) * | 2019-05-13 | 2020-11-13 | 杭州海康威视数字技术股份有限公司 | 一种光圈控制方法及摄像机 |
CN110177207A (zh) * | 2019-05-29 | 2019-08-27 | 努比亚技术有限公司 | 逆光图像的拍摄方法、移动终端及计算机可读存储介质 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116320716A (zh) * | 2023-05-25 | 2023-06-23 | 荣耀终端有限公司 | 图片采集方法、模型训练方法及相关装置 |
CN116320716B (zh) * | 2023-05-25 | 2023-10-20 | 荣耀终端有限公司 | 图片采集方法、模型训练方法及相关装置 |
Also Published As
Publication number | Publication date |
---|---|
CN117544851A (zh) | 2024-02-09 |
EP4366289A1 (en) | 2024-05-08 |
CN115484383B (zh) | 2023-09-26 |
CN115484383A (zh) | 2022-12-16 |
BR112024002006A2 (pt) | 2024-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113132620B (zh) | 一种图像拍摄方法及相关装置 | |
US11800221B2 (en) | Time-lapse shooting method and device | |
EP4044580B1 (en) | Capturing method and electronic device | |
EP3893491A1 (en) | Method for photographing the moon and electronic device | |
US20210203836A1 (en) | Camera switching method for terminal, and terminal | |
JP7403551B2 (ja) | 記録フレームレート制御方法及び関連装置 | |
US11949978B2 (en) | Image content removal method and related apparatus | |
WO2021078001A1 (zh) | 一种图像增强方法及装置 | |
EP3873084B1 (en) | Method for photographing long-exposure image and electronic device | |
WO2023273323A9 (zh) | 一种对焦方法和电子设备 | |
CN114140365B (zh) | 基于事件帧的特征点匹配方法及电子设备 | |
CN113810603B (zh) | 点光源图像检测方法和电子设备 | |
CN113625860A (zh) | 模式切换方法、装置、电子设备及芯片系统 | |
EP4175285A1 (en) | Method for determining recommended scene, and electronic device | |
WO2021179186A1 (zh) | 一种对焦方法、装置及电子设备 | |
WO2023011302A1 (zh) | 拍摄方法及相关装置 | |
CN115150542B (zh) | 一种视频防抖方法及相关设备 | |
WO2022033344A1 (zh) | 视频防抖方法、终端设备和计算机可读存储介质 | |
CN116055872B (zh) | 图像获取方法、电子设备和计算机可读存储介质 | |
CN118474522A (zh) | 拍照方法、终端设备、芯片及存储介质 | |
CN118552452A (zh) | 去除摩尔纹的方法及相关装置 | |
CN118368462A (zh) | 一种投屏方法 | |
CN115268742A (zh) | 一种生成封面的方法及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 2022852016; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2022852016; Country of ref document: EP; Effective date: 20240105 |
| REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112024002006; Country of ref document: BR |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 112024002006; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20240131 |