CN107071293B - Shooting device, method and mobile terminal - Google Patents


Publication number: CN107071293B
Authority: CN (China)
Prior art keywords: image, shooting, subject, module, main body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710189250.8A
Other languages: Chinese (zh)
Other versions: CN107071293A (en)
Inventor
吴苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201710189250.8A priority Critical patent/CN107071293B/en
Publication of CN107071293A publication Critical patent/CN107071293A/en
Application granted granted Critical
Publication of CN107071293B publication Critical patent/CN107071293B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Abstract

The invention provides a shooting device, a shooting method and a mobile terminal. A first image of the shooting subject is captured under a normal exposure condition by a first camera module while a second image of the shooting subject is captured under a dark-light condition by a second camera module, wherein the brightness of the shooting subject is greater than that of the background and the actual distance between the shooting subject and the background is greater than or equal to a preset distance. The image of the shooting subject is stripped from the first image to obtain an object image, the object image comprising the image of the shooting subject; the object image is then superimposed on the second image to form a composite image. Because the object image acquired under normal exposure contrasts sharply with the dark second image, superimposing the two clearly highlights the shooting subject in the composite image while keeping it true and natural.

Description

Shooting device, method and mobile terminal
Technical Field
The present invention relates to the field of photographing and photography, and more particularly, to a photographing apparatus, method and mobile terminal.
Background
Photographing plays an increasingly important role in everyday life: it is a very convenient way to record life, and a user can preserve the moment simply by lightly pressing the shutter key. In the prior art, because most pictures are taken with a single lens, the resulting effect is often fixed; although the picture can be set up beforehand or retouched afterwards, such processing acts on the image as a whole. If the user wants to highlight the shooting subject in the picture, such as a portrait or a lotus, the expected effect is difficult to achieve through pre-shot settings or post-shot correction.
Disclosure of Invention
The technical problem the invention aims to solve is how to avoid the unclear shooting subject and blurred theme of the prior shooting technology. To solve this technical problem, a photographing apparatus is provided, including:
the shooting module, used for shooting a first image of the shooting subject under a normal exposure condition through the first camera module while simultaneously shooting a second image of the shooting subject under a dark-light condition through the second camera module, wherein the brightness of the shooting subject is greater than that of the background;
the peeling module, used for stripping the image of the shooting subject from the first image to obtain an object image, wherein the object image includes the image of the shooting subject;
and the synthesis module, used for superimposing the object image on the second image to obtain a composite image.
Optionally, the peeling module is further configured to:
determining focal length information when the first image is obtained;
determining the range of the image of the shooting subject in the first image according to the focal length information when the first image is obtained and the actual size of the shooting subject;
and stripping the image of the shooting subject from the first image.
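The range determination from the focal length and the subject's actual size can be sketched with a simple pinhole-camera model. This is an assumption on top of the patent text — it gives no formula — and the function name, pixel-pitch parameter, and sample values below are purely illustrative:

```python
def subject_extent_px(focal_len_mm, subject_size_m, distance_m, pixel_pitch_um):
    """Pinhole-model estimate of the subject's extent in the first image.

    on-sensor size = focal length * actual size / subject distance;
    dividing by the pixel pitch converts millimetres on the sensor
    into a pixel count, bounding the subject's range in the image.
    """
    on_sensor_mm = focal_len_mm * (subject_size_m * 1000.0) / (distance_m * 1000.0)
    return on_sensor_mm / (pixel_pitch_um / 1000.0)

# Example: a 1.7 m person, 3 m away, 4 mm lens, 1.12 um pixels.
height_px = subject_extent_px(4.0, 1.7, 3.0, 1.12)
```

With the focal length known from the capture metadata, this bound lets the peeling module restrict its search for the subject to a region of plausible size.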
Optionally, the peeling module is further configured to: and determining the range of the image of the shooting subject in the first image through the chromaticity and/or brightness distribution in the first image.
Optionally, the system further comprises a feathering module, configured to perform feathering on the object image.
The invention also provides a mobile terminal which comprises the shooting device.
The invention also provides a shooting method, which comprises the following steps:
acquiring a first image of the shooting subject under a normal exposure condition through a first camera module and, at the same time, acquiring a second image of the shooting subject under a dark-light condition through a second camera module, wherein the brightness of the shooting subject is greater than that of the background;
stripping the image of the shooting subject from the first image to obtain a target image; the object image includes an image of the subject;
and superposing the object image and the second image to form a composite image.
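The stripping and superposition steps above can be sketched as follows. The luminance-threshold mask is a hypothetical stand-in — the patent does not prescribe a particular segmentation algorithm — and relies on the stated condition that the subject is brighter than the background:

```python
import numpy as np

def composite(first_img, second_img, luma_threshold=128):
    """Strip the bright shooting subject from the normally exposed
    first image and superimpose it onto the dark second image."""
    luma = first_img.mean(axis=2)        # crude per-pixel luminance
    mask = luma > luma_threshold         # True where the bright subject is
    out = second_img.copy()
    out[mask] = first_img[mask]          # paste subject pixels over the dark frame
    return out, mask

# Toy frames: a bright 2x2 "subject" on a mid-grey background,
# plus an underexposed second frame.
first = np.full((4, 4, 3), 100, np.uint8)
first[1:3, 1:3] = 220
dark = np.full((4, 4, 3), 10, np.uint8)
result, mask = composite(first, dark)
```

Only the subject pixels survive into the composite; the rest of the frame keeps the dark exposure, which is exactly the highlighting effect the method describes.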
Optionally, the peeling the image of the subject from the first image includes:
determining focal length information when the first image is obtained;
determining the range of the image of the shooting subject in the first image according to the focal length information when the first image is obtained and the actual size of the shooting subject;
and stripping the image of the shooting subject from the first image.
Optionally, the method further includes: and determining the range of the image of the shooting subject in the first image through the chromaticity and/or brightness distribution in the first image.
Optionally, the feathering process is performed on the object image.
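Feathering softens the hard edge of the stripped object image so that it blends naturally into the dark frame. A minimal sketch, using a separable box blur as a stand-in for the Gaussian feather an image editor would apply (names, radius, and the wrap-around edge handling of `np.roll` are illustrative choices, not from the patent):

```python
import numpy as np

def feather(mask, radius=1):
    """Soften a binary subject mask into a [0, 1] alpha matte
    with a separable box blur (edges wrap via np.roll --
    acceptable for a sketch)."""
    alpha = mask.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(alpha)
        for off in range(-radius, radius + 1):
            acc += np.roll(alpha, off, axis=axis)
        alpha = acc / (2 * radius + 1)
    return alpha

def blend(subject_img, dark_img, alpha):
    """Alpha-composite the feathered subject over the dark frame."""
    a = alpha[..., None]
    return (a * subject_img + (1.0 - a) * dark_img).astype(np.uint8)

# A single subject pixel spreads into a soft 3x3 footprint.
mask = np.zeros((5, 5), bool)
mask[2, 2] = True
alpha = feather(mask, radius=1)
```

Blending with the soft matte instead of the hard mask removes the visible cut-out edge around the subject.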
Optionally, acquiring the second image of the shooting subject under the dark-light condition through the second camera module includes: lowering the sensitivity and/or aperture of the second camera module so that the imaging brightness of the second camera module is low.
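As a rough numeric check of why lowering sensitivity and aperture darkens the frame (a standard exposure relation, not a formula given in the patent): with shutter speed held constant, image brightness scales roughly linearly with ISO and inversely with the square of the F-number:

```python
def exposure_ratio(iso_a, fnum_a, iso_b, fnum_b):
    """Relative image brightness of setting B versus setting A,
    shutter speed held constant: brightness ~ ISO / N**2."""
    return (iso_b / iso_a) * (fnum_a / fnum_b) ** 2

# Dropping ISO 400 -> 100 and stopping down f/2 -> f/4
# darkens the second camera module's image by a factor of 16.
ratio = exposure_ratio(400, 2.0, 100, 4.0)
```

So modest changes to both settings are enough to push the second image well below the normally exposed first image, producing the strong light/dark contrast the synthesis step relies on.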
Advantageous effects
The invention provides a shooting device, a shooting method and a mobile terminal. A first image of the shooting subject is captured under a normal exposure condition by a first camera module while a second image of the shooting subject is captured under a dark-light condition by a second camera module, wherein the brightness of the shooting subject is greater than that of the background and the actual distance between the shooting subject and the background is greater than or equal to a preset distance. The image of the shooting subject is stripped from the first image to obtain an object image, the object image comprising the image of the shooting subject; the object image is then superimposed on the second image to form a composite image. Because the object image acquired under normal exposure contrasts sharply with the dark second image, superimposing the two clearly highlights the shooting subject in the composite image while keeping it true and natural.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a schematic diagram of a hardware structure of an alternative mobile terminal for implementing various embodiments of the present invention;
FIG. 2 is an electrical schematic diagram of an alternative camera for implementing various embodiments of the invention;
FIG. 3 is a schematic diagram of a camera according to a first embodiment of the present invention;
fig. 4 is a schematic diagram of a dual-camera mobile terminal according to a first embodiment of the present invention;
fig. 5 is a schematic view of a first image captured by a first camera module according to the first embodiment of the present invention;
fig. 6 is a schematic diagram of a second image acquired by a second camera module according to the first embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating an object image stripped from a first image according to a first embodiment of the present invention;
fig. 8 is a schematic diagram illustrating an object image and a second image superimposed to form a composite image according to a first embodiment of the present invention;
fig. 9 is a schematic diagram illustrating a mobile terminal according to a second embodiment of the present invention;
fig. 10 is a flowchart of a shooting method according to a third embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, the suffix of "unit" used to denote an element is used only for facilitating the description of the present invention, and has no specific meaning in itself.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. In the following, it is assumed that the terminal is a mobile terminal, however, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes. The mobile terminal in this embodiment may implement the photographing apparatus in each embodiment of the present invention.
Fig. 1 is a schematic diagram of a hardware structure of an optional mobile terminal for implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an a/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supply unit 190, etc. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented, and that more or fewer components may instead be implemented, the elements of the mobile terminal being described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a mobile communication unit 112, a wireless internet unit 113, a short range communication unit 114, and a location information unit 115.
The mobile communication unit 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet unit 113 supports wireless internet access for the mobile terminal. The wireless internet access technology to which the unit refers may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication unit 114 is a unit for supporting short-range communication. Some examples of short-range communication technologies include bluetooth (TM), Radio Frequency Identification (RFID), infrared data association (IrDA), Ultra Wideband (UWB), zigbee (TM), and the like.
The location information unit 115 is a unit for checking or acquiring location information of the mobile terminal. A typical example of the location information unit is a GPS (global positioning system). According to the current technology, the GPS unit 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, a method for calculating position and time information uses three satellites and corrects an error of the calculated position and time information by using another satellite. In addition, the GPS unit 115 can calculate speed information by continuously calculating current position information in real time.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 can receive sounds (audio data) via the microphone in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication unit 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a light sensor 141.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification unit, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification unit may store various information for authenticating a user using the mobile terminal 100 and may include a user identity Unit (UIM), a subscriber identity unit (SIM), a universal subscriber identity Unit (USIM), and the like. In addition, a device having an identification unit (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner.
The output unit 150 may include a display unit 151, an audio output unit 152, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are stacked on each other in the form of a layer to form a touch screen, the display unit 151 may be used as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside; these may be referred to as transparent displays, a typical example being the TOLED (transparent organic light-emitting diode) display. The mobile terminal 100 may include two or more display units (or other display devices) according to a particular intended embodiment; for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output unit 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 152 may include a speaker, a buzzer, and the like.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia unit 181 for reproducing (or playing back) multimedia data, and the multimedia unit 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
For a hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein. In some cases, such an implementation may be realized in the controller 180.
Up to this point, mobile terminals have been described in terms of their functionality. Hereinafter, among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type terminals, the slide-type mobile terminal will be described as an example for the sake of brevity. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide type.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
Fig. 2 is an electrical block diagram of an alternative camera implementing various embodiments of the invention.
The photographing lens 1211 is composed of a plurality of optical lenses for forming an object image, and is a single focus lens or a zoom lens. The photographing lens 1211 is movable in the optical axis direction under the control of the lens driver 1221, and the lens driver 1221 controls the focal position of the photographing lens 1211 in accordance with a control signal from the lens driving control circuit 1222. The lens drive control circuit 1222 performs drive control of the lens driver 1221 in accordance with a control command from the microcomputer 1217.
An image pickup device 1212 is disposed on the optical axis of the photographing lens 1211 near the position of the object image formed by the photographing lens 1211. The image pickup device 1212 is used to pick up an image of an object and acquire picked-up image data. Photodiodes constituting each pixel are two-dimensionally arranged in a matrix on the image pickup device 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to each photodiode. A bayer RGB color filter is disposed on the front surface of each pixel.
The image pickup device 1212 is connected to an image pickup circuit 1213, and the image pickup circuit 1213 performs charge accumulation control and image signal reading control in the image pickup device 1212, performs waveform shaping after reducing reset noise for the read image signal (analog image signal), and further performs gain improvement or the like so as to obtain an appropriate signal level.
The imaging circuit 1213 is connected to an a/D converter 1214, and the a/D converter 1214 performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
The bus 1227 is connected to the A/D converter 1214, and is further connected to an image processor 1215, a JPEG processor 1216, a microcomputer 1217, an SDRAM (synchronous dynamic random access memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (liquid crystal display) driver 1220.
The image processor 1215 performs various image processing, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color-difference signal processing, noise removal, synchronization processing, and edge processing, on the image data output from the image pickup device 1212. The JPEG processor 1216 compresses image data read from the SDRAM 1218 according to the JPEG compression method when recording it to the recording medium 1225, and decompresses JPEG image data for reproduction and display: the file recorded on the recording medium 1225 is read, decompressed by the JPEG processor 1216, temporarily stored in the SDRAM 1218, and displayed on the LCD 1226.
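Of the processing steps just listed, gamma conversion can be sketched in a few lines. This is a generic illustration — the 2.2 exponent and 8-bit value range are common defaults, not values taken from the document:

```python
import numpy as np

def gamma_convert(raw, gamma=2.2):
    """Gamma conversion as performed by an image processor: map
    linear 8-bit sensor values to display-referred values via
    v ** (1 / gamma)."""
    x = raw.astype(float) / 255.0
    return np.round(255.0 * x ** (1.0 / gamma)).astype(np.uint8)

# Linear mid-tones are lifted: 64 -> ~136 with gamma 2.2,
# while black and white endpoints are preserved.
out = gamma_convert(np.array([[0, 64, 255]], dtype=np.uint8))
```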
The microcomputer 1217 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera. The microcomputer 1217 is connected to an operation unit 1223 and a flash memory 1224.
The operation unit 1223 includes, but is not limited to, physical keys or virtual keys, which may be various input buttons such as a power button, a photographing key, an editing key, a moving image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, and an enlargement button, and operation controls such as various input keys, and which detect operation states of these operation controls.
The detection result is output to the microcomputer 1217. Furthermore, a touch panel is provided on the front surface of the LCD 1226 serving as the display; it detects the position touched by the user and outputs that position to the microcomputer 1217. The microcomputer 1217 executes the various processing sequences corresponding to the user's operation in accordance with the detection result of the operation position from the operation unit 1223.
The flash memory 1224 stores programs for executing various processing sequences of the microcomputer 1217. The microcomputer 1217 controls the entire camera according to the program. The flash memory 1224 stores various adjustment values of the camera, and the microcomputer 1217 reads the adjustment values and controls the camera in accordance with the adjustment values.
The SDRAM1218 is an electrically rewritable volatile memory for temporarily storing image data and the like. The SDRAM1218 temporarily stores the image data output from the a/D converter 1214 and the image data processed by the image processor 1215, JPEG processor 1216, and the like.
The memory interface 1219 is connected to the recording medium 1225, and performs control for writing and reading image data and data such as a file header added to the image data to and from the recording medium 1225. The recording medium 1225 is, for example, a recording medium such as a memory card that can be attached to and detached from the camera body, but is not limited to this, and may be a hard disk or the like that is built in the camera body.
The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218 and, when display is required, is read from the SDRAM 1218 and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is required, the JPEG processor 1216 reads and decompresses the compressed image data from the SDRAM 1218, and the decompressed image data is displayed on the LCD 1226.
The LCD 1226 is disposed on the back surface of the camera body to display images. However, the display is not limited to an LCD; various other display panels, such as organic EL, may be used instead.
The following is a detailed description of specific examples.
First embodiment
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a shooting device according to a first embodiment of the present invention.
The photographing device in this embodiment includes:
the shooting module 301 is configured to shoot a first image of a subject under a normal exposure condition through a first camera module, and simultaneously shoot a second image of the subject under a dark light condition through a second camera module; the brightness of the shooting subject is greater than that of the background, and the actual distance between the shooting subject and the background is greater than or equal to a preset distance;
a peeling module 302, configured to peel an image of a subject from a first image to obtain a target image; the object image includes an image of a subject;
the synthesizing module 303 is configured to superimpose the object image and the second image to obtain a synthesized image.
More and more mobile terminals are equipped with a dual-camera function, that is, they are provided with two camera modules, a first camera module and a second camera module. When shooting, the two camera modules can shoot independently, and their results are then combined. The dual-camera technology enables functions that a single camera cannot provide, such as determination of shooting depth of field, 3D scanning, auxiliary focusing, and motion recognition; even under the same or similar parameters, a picture shot with dual cameras is clearer than one shot with a single camera. Referring to fig. 4, fig. 4 is a schematic diagram illustrating a terminal using the dual-camera technology, showing the positions of its two camera modules. Because the two camera modules cannot completely coincide and are separated by a certain distance, the images they capture at the same moment differ slightly in position; however, because they shoot simultaneously, the image formed by the shooting subject in each module at the shooting moment is the same.
In this embodiment, the shooting module 301 captures the first image of the shooting subject under a normal exposure condition through the first camera module. The normal exposure condition refers to the lighting of the shooting subject in the finished picture as desired by the user; depending on specific requirements, it may rely entirely on natural light, or use fill light provided by the mobile terminal or by external lighting equipment. What this embodiment aims for is to highlight the shooting subject in the picture, so the subject should be bright enough to be easily distinguished from its surroundings. Referring to fig. 5, fig. 5 is a schematic diagram illustrating a first image captured by the first camera module, where the shooting subject is the portrait portion of the picture.
While the shooting module 301 captures the first image of the shooting subject under the normal exposure condition through the first camera module, it captures the second image of the shooting subject under a dark light condition through the second camera module. Referring to fig. 6, fig. 6 is a schematic diagram illustrating a second image captured by the second camera module, corresponding to fig. 5. The dark light condition does not mean that the ambient brightness needs to be reduced; rather, it means that the second camera module images darkly. Optionally, in this embodiment, capturing the second image of the shooting subject under the dark light condition through the second camera module may include: lowering the sensitivity and/or the aperture of the second camera module so that the imaging brightness of the second camera module is low. The lower the sensitivity, the finer the grain of the picture and the better its overall integrity. The aperture is a device, usually disposed in the lens, that controls the amount of light passing through the lens onto the light-sensing surface of the camera module. The size of the aperture is expressed by the F-number, where

F-number = focal length f of the lens / diameter d of the clear aperture.

It can be seen that the larger the F-number, the smaller the diameter d of the clear aperture, and accordingly the smaller the amount of light admitted. The less light enters the lens, the darker the captured image and the lower its brightness. Therefore, by setting the F-number large (i.e., the actual aperture small) and/or the sensitivity low, a darker image can be obtained.
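The F-number relation above can be sketched in a few lines; the lens values below are illustrative, not taken from the patent.

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """F = f / d: a larger F-number means a smaller clear aperture."""
    return focal_length_mm / aperture_diameter_mm

def relative_light(f_value: float, reference_f: float) -> float:
    """Light admitted scales with aperture area, i.e. with (1 / F)^2."""
    return (reference_f / f_value) ** 2

# Stopping a 50 mm lens down from F/2 (d = 25 mm) to F/8 (d = 6.25 mm)
# admits 1/16 of the light, which is how the second camera module can
# produce a darker frame without changing the ambient brightness.
print(f_number(50.0, 25.0))      # 2.0
print(f_number(50.0, 6.25))      # 8.0
print(relative_light(8.0, 2.0))  # 0.0625
```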
In this embodiment, the first camera module and the second camera module each collect an image of the shooting subject. In subsequent processing, the peeling module 302 needs to peel the image of the shooting subject from the first image to obtain the object image: it cuts out the image of the shooting subject in the first image, discards the other background or foreground portions, and retains the cut-out image of the shooting subject to form the object image. Ideally, the object image contains only the image of the shooting subject. Referring to fig. 7, fig. 7 is a schematic diagram illustrating an object image formed by peeling the shooting subject out of fig. 5.
In this embodiment, to peel the image of the shooting subject from the first image, the peeling module 302 may further be configured to: determine the focal length information used when the first image was acquired; determine the range occupied by the image of the shooting subject in the first image according to that focal length information and the actual size of the shooting subject; and peel the image of the shooting subject from the first image accordingly. Shooting a clear image necessarily requires focusing, and the focus obviously falls on the shooting subject, so the image of the shooting subject is the sharpest part of the first image. The focal plane is not an absolute plane: images within a certain range in front of and behind it are still considered sharp, and this range is called the depth of field. Because this embodiment uses both the first camera module and the second camera module, the depth of field can be determined from the two images they acquire, i.e., the first image and the second image; once the depth of field is determined, the range of the image of the shooting subject in the first image can be determined in combination with the size characteristics of the shooting subject. As for the size characteristics: if the shooting subject is a single person and the focus generally falls on the face, the depth of field can be taken as the thickness of the human body; if the shooting subject is a group, the depth of field is determined according to the position of each individual.
Of course, it is not necessary in every case to ensure that the entire shooting subject is sufficiently sharp; sometimes only part of the shooting subject needs to be captured sharply. In these cases, the range of the image of the shooting subject in the first image can still be determined from the focal length information of the first image and the actual size of the shooting subject.
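The depth reasoning above can be sketched with standard stereo triangulation. The pinhole-camera model and all numeric values here are illustrative assumptions of this sketch, not details specified by the patent; `depth_from_disparity` and `in_subject_range` are hypothetical helper names.

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Standard stereo triangulation: Z = f * B / d.
    focal_px: focal length in pixels; baseline_mm: distance between the two
    camera modules; disparity_px: horizontal shift of the same scene point
    between the first and second images."""
    return focal_px * baseline_mm / disparity_px

def in_subject_range(depth_mm: float, subject_depth_mm: float, thickness_mm: float) -> bool:
    """A pixel belongs to the shooting subject if its depth lies within the
    focused distance plus/minus half the subject's thickness (for example,
    the thickness of a human body for a single-person portrait)."""
    half = thickness_mm / 2.0
    return subject_depth_mm - half <= depth_mm <= subject_depth_mm + half

# A point with 40 px of disparity, f = 1000 px, and a 60 mm baseline lies
# at 1500 mm; with the subject focused at 1500 mm and a 300 mm body
# thickness, it falls inside the subject range.
z = depth_from_disparity(1000.0, 60.0, 40.0)
print(z)                                    # 1500.0
print(in_subject_range(z, 1500.0, 300.0))   # True
```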
Furthermore, in this embodiment, the peeling module 302 may further be configured to determine the range of the image of the shooting subject in the first image through the chromaticity and/or brightness distribution in the first image. Besides the focal length information described above, the range of the shooting subject can also be determined from the chromaticity information in the first image, i.e., its color distribution. Where the shooting subject meets the background there is usually a relatively obvious color change, so when peeling the shooting subject, the boundary between the subject and the background can be located by additionally referring to the color changes in the first image, thereby determining the range of the shooting subject. Likewise, the brightness change between the shooting subject and the background can be determined from the brightness information, i.e., the brightness distribution, in the first image; since the brightness of the shooting subject is higher than the ambient brightness, the range of the image of the shooting subject in the first image can be determined according to the change in brightness.
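Because the shooting subject is brighter than the background, a simple luminance threshold can approximate its range, as a minimal sketch of the brightness-distribution approach; the threshold value, the tiny test image, and the `subject_mask` name are assumptions of this illustration.

```python
import numpy as np

def subject_mask(rgb: np.ndarray, threshold: float) -> np.ndarray:
    """Return a boolean mask of pixels whose luminance exceeds threshold.
    rgb: H x W x 3 array with values in [0, 1]; luminance uses the common
    Rec. 601 weights."""
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return luma > threshold

# One bright "subject" pixel against a dark background.
img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 1.0, 1.0]    # subject pixel
mask = subject_mask(img, 0.5)
print(mask[0, 0], mask[1, 1])  # True False
```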
After peeling the image of the shooting subject from the first image, the peeling module 302 forms the object image, which includes the image of the shooting subject; the synthesizing module 303 then superimposes the object image on the second image to obtain a composite image. Referring to fig. 8, fig. 8 is a schematic diagram illustrating the composite image formed by superimposing the object image and the second image. The second image is shot under the dark light condition, so the whole picture has low brightness: both the background and the image of the shooting subject have dark tones. After the object image and the second image are superimposed, the background is still the background captured in the second image, while the shooting subject becomes the superposition of the object image and the image of the shooting subject in the second image. Superimposing the normally exposed image of the shooting subject on its dark-light counterpart makes the image of the shooting subject fuller, and the dark background sets it off prominently. If the dark light condition of the second image is dark enough, even a completely dark background can be captured, making the shooting subject extremely prominent in the resulting composite image. Because the first image and the second image are acquired simultaneously, no time deviation between the pictures occurs during composition; for example, a person's expression and posture are consistent between the first image and the second image.
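The superposition step can be sketched as follows: outside the subject mask the composite keeps the dark second image, and inside it the normally exposed subject is added onto its dark-light counterpart (the "fuller" subject described above). Clipping the sum to [0, 1] and the specific pixel values are assumptions of this sketch.

```python
import numpy as np

def composite(object_img: np.ndarray, mask: np.ndarray, second_img: np.ndarray) -> np.ndarray:
    """Background from the dark second image; subject pixels become the
    clipped sum of the normally exposed and dark-light subject images."""
    out = second_img.copy()
    out[mask] = np.clip(object_img[mask] + second_img[mask], 0.0, 1.0)
    return out

obj = np.full((2, 2, 3), 0.75)     # normally exposed subject values
dark = np.full((2, 2, 3), 0.125)   # dark-light second image values
m = np.zeros((2, 2), dtype=bool)
m[0, 0] = True                     # one subject pixel
result = composite(obj, m, dark)
print(result[0, 0, 0])  # 0.875  (subject: 0.75 + 0.125)
print(result[1, 1, 0])  # 0.125  (background stays dark)
```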
In addition, within an allowable error range (for example, if the imaging speed of the mobile terminal is fast enough), the shooting can be completed by a single camera module, that is, the second image is shot within a sufficiently short time after the first image; however, because the first image and the second image are then not collected simultaneously, a time difference between the two pictures inevitably arises, which affects the precision of the composition.
In addition, this embodiment may further include a feathering module 304, configured to feather the object image, i.e., to blur its edge portion. When the feathered object image is superimposed on the second image to obtain the composite image, the transition between the two images is more natural. The larger the feathering degree, the larger the blurred range and the softer the color gradient; conversely, the smaller the feathering degree, the smaller the blurred range. The specific feathering range may be determined according to the brightness and chromaticity differences between the object image and the second image.
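Feathering can be sketched by softening the binary subject mask into a [0, 1] alpha map (here with a separable box blur, an assumption of this sketch; a Gaussian would give a smoother falloff) and using it for a weighted blend, so the edge of the object image fades into the second image. The blur radius and image sizes are illustrative.

```python
import numpy as np

def feather(mask: np.ndarray, radius: int) -> np.ndarray:
    """Soften a binary mask into a [0, 1] alpha map with a separable box blur."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    alpha = mask.astype(float)
    for axis in (0, 1):
        alpha = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, alpha)
    return alpha

def blend(object_img: np.ndarray, second_img: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Alpha-weighted blend of the object image over the second image."""
    a = alpha[..., None]  # broadcast the alpha map over the colour channels
    return a * object_img + (1.0 - a) * second_img

mask = np.zeros((9, 9))
mask[2:7, 2:7] = 1.0                   # square subject region
alpha = feather(mask, 1)
print(round(float(alpha[4, 4]), 6))    # 1.0  (interior stays fully subject)
print(alpha[4, 0])                     # 0.0  (far background untouched)
print(0.0 < alpha[4, 2] < 1.0)         # True (edge is softened)
out = blend(np.ones((9, 9, 3)), np.zeros((9, 9, 3)), alpha)
print(round(float(out[4, 4, 0]), 6))   # 1.0
```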
This embodiment provides a shooting device, which shoots a first image of the shooting subject under a normal exposure condition through a first camera module and simultaneously shoots a second image of the shooting subject under a dark light condition through a second camera module, where the brightness of the shooting subject is greater than that of the background and the actual distance between the shooting subject and the background is greater than or equal to a preset distance; peels the image of the shooting subject from the first image to obtain an object image, the object image including the image of the shooting subject; and superimposes the object image on the second image to form a composite image. With this embodiment, the contrast in brightness between the object image acquired under normal exposure and the second image acquired under the dark light condition is obvious; superimposing the object image on the dark image clearly highlights the shooting subject in the composite image, and the shooting subject appears true and natural.
Second embodiment
Referring to fig. 9, fig. 9 is a schematic diagram of a mobile terminal according to the present embodiment,
The mobile terminal in this embodiment comprises the above shooting device, which includes a shooting module, a peeling module and a synthesizing module. The shooting module includes at least two camera modules, a first camera module and a second camera module; the two camera modules can be implemented by the camera 121 in the A/V input unit 12 of the foregoing embodiment, namely the first camera 1211 and the second camera 1212. The peeling module and the synthesizing module may be implemented by the controller 180 in the foregoing embodiments. The specific implementation is as follows:
The first camera 1211 acquires a first image of the shooting subject under a normal exposure condition, while the second camera 1212 acquires a second image of the shooting subject under a dark light condition; the brightness of the shooting subject is greater than that of the background. The controller 180 peels the image of the shooting subject from the first image to obtain an object image; the object image includes the image of the shooting subject. The controller 180 then superimposes the object image on the second image to obtain a composite image.
The first camera 1211 shoots the first image of the shooting subject under the normal exposure condition. The normal exposure condition refers to the lighting of the shooting subject in the finished picture as desired by the user; depending on specific requirements, it may rely entirely on natural light, or use fill light provided by the mobile terminal or by external lighting equipment.
While the first camera 1211 acquires the first image, the second camera 1212 acquires the second image of the shooting subject under a dark light condition. The dark light condition does not mean that the ambient brightness needs to be reduced; rather, it means that the second camera 1212 images darkly. Optionally, in this embodiment, capturing the second image of the shooting subject under the dark light condition through the second camera 1212 may include: lowering the sensitivity and/or the aperture of the second camera 1212 so that its imaging brightness is low.
The first camera 1211 and the second camera 1212 each collect an image of the shooting subject. In subsequent processing, the controller 180 needs to peel the image of the shooting subject from the first image to obtain the object image: it cuts out the image of the shooting subject in the first image, discards the other background or foreground portions, and retains the cut-out image to form the object image. Ideally, the object image contains only the image of the shooting subject.
The controller 180 may also be configured to: determine the focal length information used when the first image was acquired; determine the range of the image of the shooting subject in the first image according to that focal length information and the actual size of the shooting subject; and peel the image of the shooting subject from the first image.
Further, the controller 180 may be configured to determine the range of the image of the shooting subject in the first image through the chromaticity and/or brightness distribution in the first image. Besides the focal length information described above, the range of the shooting subject can also be determined from the chromaticity information in the first image, i.e., its color distribution. Where the shooting subject meets the background there is usually a relatively obvious color change, so when peeling the shooting subject, the boundary between the subject and the background can be located by additionally referring to the color changes in the first image, thereby determining the range of the shooting subject. Likewise, the brightness change between the shooting subject and the background can be determined from the brightness information, i.e., the brightness distribution, in the first image; since the brightness of the shooting subject is higher than the ambient brightness, the range of the image of the shooting subject in the first image can be determined according to the change in brightness.
After peeling the image of the shooting subject from the first image, the controller 180 forms the object image, which includes the image of the shooting subject; the controller 180 then superimposes the object image on the second image to obtain a composite image. The second image is shot under the dark light condition, so the whole picture has low brightness: both the background and the image of the shooting subject have dark tones. After the object image and the second image are superimposed, the background is still the background captured in the second image, while the shooting subject becomes the superposition of the object image and the image of the shooting subject in the second image. Superimposing the normally exposed image of the shooting subject on its dark-light counterpart makes the image of the shooting subject fuller, and the dark background sets it off prominently. If the dark light condition of the second image is dark enough, even a completely dark background can be captured, making the shooting subject extremely prominent in the resulting composite image.
Furthermore, in this embodiment, the controller 180 may be further configured to feather the object image, i.e., to blur its edge portion. When the feathered object image is superimposed on the second image to obtain the composite image, the transition between the two images is more natural.
This embodiment provides a mobile terminal comprising a shooting device, which includes a first camera, a second camera and a controller. The first camera captures a first image of the shooting subject under a normal exposure condition, the second camera captures a second image of the shooting subject under a dark light condition, and the controller peels the image of the shooting subject from the first image to obtain an object image, then superimposes the object image on the second image to form a composite image. With this embodiment, the contrast in brightness between the object image acquired under normal exposure and the second image acquired under the dark light condition is obvious; superimposing the object image on the dark image clearly highlights the shooting subject in the composite image, and the shooting subject appears true and natural.
Third embodiment
Referring to fig. 10, fig. 10 is a flowchart of a shooting method according to a third embodiment of the present invention, including:
S101: shoot a first image of the shooting subject under a normal exposure condition through a first camera module, and simultaneously shoot a second image of the shooting subject under a dark light condition through a second camera module; the brightness of the shooting subject is greater than that of the background;
S102: peel the image of the shooting subject from the first image to obtain an object image; the object image includes the image of the shooting subject;
S103: superimpose the object image on the second image to form a composite image.
In S101, a first image of the shooting subject is shot under a normal exposure condition through the first camera module. The normal exposure condition refers to the lighting of the shooting subject in the finished picture as desired by the user; depending on specific requirements, it may rely entirely on natural light, or use fill light provided by the mobile terminal or by external lighting equipment. What this embodiment aims for is to highlight the shooting subject in the picture, so the subject should be bright enough to be easily distinguished from its surroundings.
While the first image of the shooting subject is collected under the normal exposure condition through the first camera module, the second image of the shooting subject is collected under the dark light condition through the second camera module. The dark light condition does not mean that the ambient brightness needs to be reduced; rather, it means that the second camera module images darkly. Optionally, in this embodiment, capturing the second image of the shooting subject under the dark light condition through the second camera module may include: lowering the sensitivity and/or the aperture of the second camera module so that the imaging brightness of the second camera module is low. The lower the sensitivity, the finer the grain of the picture and the better its overall integrity. The aperture is a device, usually disposed in the lens, that controls the amount of light passing through the lens onto the light-sensing surface of the camera module. The size of the aperture is expressed by the F-number, where

F-number = focal length f of the lens / diameter d of the clear aperture.

It can be seen that the larger the F-number, the smaller the diameter d of the clear aperture, and accordingly the smaller the amount of light admitted. The less light enters the lens, the darker the captured image and the lower its brightness. Therefore, by setting the F-number large (i.e., the actual aperture small) and/or the sensitivity low, a darker image can be obtained.
In this embodiment, the first camera module and the second camera module each collect an image of the shooting subject. In subsequent processing, the image of the shooting subject needs to be peeled from the first image to obtain the object image: the image of the shooting subject in the first image is cut out, the other background or foreground portions are discarded, and the cut-out image of the shooting subject is retained to form the object image. Ideally, the object image contains only the image of the shooting subject.
In S102, peeling the image of the shooting subject from the first image may include: determining the focal length information used when the first image was acquired; determining the range of the image of the shooting subject in the first image according to that focal length information and the actual size of the shooting subject; and peeling the image of the shooting subject from the first image accordingly. Shooting a clear image necessarily requires focusing, and the focus obviously falls on the shooting subject, so the image of the shooting subject is the sharpest part of the first image. The focal plane is not an absolute plane: images within a certain range in front of and behind it are still considered sharp, and this range is called the depth of field. Because this embodiment uses both the first camera module and the second camera module, the depth of field can be determined from the two images they acquire, i.e., the first image and the second image; once the depth of field is determined, the range of the image of the shooting subject in the first image can be determined in combination with the size characteristics of the shooting subject. As for the size characteristics: if the shooting subject is a single person and the focus generally falls on the face, the depth of field can be taken as the thickness of the human body; if the shooting subject is a group, the depth of field is determined according to the position of each individual. Of course, it is not necessary in every case to ensure that the entire shooting subject is sufficiently sharp; sometimes only part of the shooting subject needs to be captured sharply. In these cases, the range of the image of the shooting subject in the first image can still be determined from the focal length information of the first image and the actual size of the shooting subject.
In addition, in this embodiment, peeling the image of the shooting subject from the first image may further include: determining the range of the image of the shooting subject in the first image through the chromaticity and/or brightness distribution in the first image. Besides the focal length information described above, the range of the shooting subject can also be determined from the chromaticity information in the first image, i.e., its color distribution. Where the shooting subject meets the background there is usually a relatively obvious color change, so when peeling the shooting subject, the boundary between the subject and the background can be located by additionally referring to the color changes in the first image, thereby determining the range of the shooting subject. Likewise, the brightness change between the shooting subject and the background can be determined from the brightness information, i.e., the brightness distribution, in the first image; since the brightness of the shooting subject is higher than the ambient brightness, the range of the image of the shooting subject in the first image can be determined according to the change in brightness.
In S103, after the image of the shooting subject is peeled from the first image, the object image including the image of the shooting subject is formed; the object image and the second image are then superimposed to obtain a composite image. The second image is shot under the dark light condition, so the whole picture has low brightness: both the background and the image of the shooting subject have dark tones. After the object image and the second image are superimposed, the background is still the background captured in the second image, while the shooting subject becomes the superposition of the object image and the image of the shooting subject in the second image. Superimposing the normally exposed image of the shooting subject on its dark-light counterpart makes the image of the shooting subject fuller, and the dark background sets it off prominently. If the dark light condition of the second image is dark enough, even a completely dark background can be captured, making the shooting subject extremely prominent in the resulting composite image. Because the first image and the second image are acquired simultaneously, no time deviation between the pictures occurs during composition; for example, a person's expression and posture are consistent between the first image and the second image. In addition, within an allowable error range (for example, if the imaging speed of the mobile terminal is fast enough), the shooting can be completed by a single camera module, that is, the second image is shot within a sufficiently short time after the first image; however, because the first image and the second image are then not collected simultaneously, a time difference between the two pictures inevitably arises, which affects the precision of the composition.
In addition, in this embodiment, after obtaining the object image, the method may further include feathering the object image, that is, blurring its edge portion. When the feathered object image is superimposed on the second image to obtain the composite image, the transition between the two images is more natural. The larger the feathering degree, the larger the blurred range and the softer the color gradient; conversely, the smaller the feathering degree, the smaller the blurred range. The specific feathering range may be determined according to the brightness and chromaticity differences between the object image and the second image.
This embodiment provides a shooting method, in which a first image of the shooting subject is shot under a normal exposure condition through a first camera module and a second image of the shooting subject is simultaneously shot under a dark light condition through a second camera module, where the brightness of the shooting subject is greater than that of the background and the actual distance between the shooting subject and the background is greater than or equal to a preset distance; the image of the shooting subject is peeled from the first image to obtain an object image, the object image including the image of the shooting subject; and the object image is superimposed on the second image to form a composite image. With this embodiment, the contrast in brightness between the object image acquired under normal exposure and the second image acquired under the dark light condition is obvious; superimposing the object image on the dark image clearly highlights the shooting subject in the composite image, and the shooting subject appears true and natural.
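Steps S101 to S103 can be strung together as a minimal end-to-end sketch. The synthetic "captures", the luminance threshold used to peel the subject, the additive superposition, and the helper names `peel` and `compose` are all assumptions of this illustration, not details fixed by the method.

```python
import numpy as np

def peel(first_image, threshold):
    """S102 sketch: the shooting subject is brighter than the background,
    so a luminance threshold separates it; background pixels are discarded."""
    mask = first_image.mean(axis=-1) > threshold
    object_image = np.where(mask[..., None], first_image, 0.0)
    return object_image, mask

def compose(object_image, mask, second_image):
    """S103 sketch: background from the dark second image, subject as the
    clipped sum of the normally exposed and dark-light subject images."""
    out = second_image.copy()
    out[mask] = np.clip(object_image[mask] + second_image[mask], 0.0, 1.0)
    return out

# S101 stand-ins: the same scene "captured" twice simultaneously, once
# normally exposed and once at 1/8 the brightness.
scene = np.full((4, 4, 3), 0.25)
scene[1:3, 1:3] = 1.0                  # bright shooting subject
first_image = scene
second_image = scene * 0.125           # dark-light capture

object_image, mask = peel(first_image, 0.5)
composite_image = compose(object_image, mask, second_image)
print(composite_image[2, 2, 0])        # 1.0     (subject: 1.0 + 0.125, clipped)
print(composite_image[0, 0, 0])        # 0.03125 (background stays dark)
```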
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A camera, comprising:
the shooting module is configured to shoot a first image of the shooting subject under a normal exposure condition through a first camera module, and simultaneously shoot a second image of the shooting subject under a dark light condition through a second camera module; wherein the brightness of the shooting subject is greater than that of the background, and the normal exposure condition is the exposure condition of the shooting subject in the composite image required by the user;
the peeling module is configured to determine the range of the image of the shooting subject in the first image according to the focal length information used when the first image was obtained and the actual size of the shooting subject, peel the image of the shooting subject from the first image, and discard the other background or foreground portions to obtain an object image; the object image includes the image of the shooting subject;
and the synthesizing module is configured to superimpose the object image and the second image to obtain a composite image.
2. The camera of claim 1, wherein the peeling module is further configured to determine the range of the image of the shooting subject in the first image through the chromaticity and/or brightness distribution in the first image.
3. The camera of any one of claims 1-2, further comprising a feathering module configured to feather the object image.
4. A mobile terminal characterized by comprising the camera according to any one of claims 1-3.
5. A photographing method, characterized by comprising:
collecting a first image of the shooting subject under a normal exposure condition through a first camera module, and simultaneously collecting a second image of the shooting subject under a dark light condition through a second camera module; wherein the brightness of the shooting subject is greater than that of the background, and the normal exposure condition is the exposure condition of the shooting subject in the composite image required by the user;
determining the range of the image of the shooting subject in the first image according to the focal length information used when the first image was obtained and the actual size of the shooting subject, peeling the image of the shooting subject from the first image, and discarding the other background or foreground portions to obtain an object image; the object image includes the image of the shooting subject;
and superimposing the object image and the second image to form a composite image.
6. The photographing method according to claim 5, wherein the method further comprises: and determining the range of the image of the shooting subject in the first image through the chromaticity and/or brightness distribution in the first image.
7. The photographing method according to any one of claims 5 to 6, further comprising, after obtaining the object image: feathering the object image.
8. The method of claim 5, wherein collecting the second image of the shooting subject under the dark light condition through the second camera module comprises: adjusting the sensitivity and/or aperture of the second camera module to be low, so that the imaging brightness of the second camera module is low.
CN201710189250.8A 2017-03-27 2017-03-27 Shooting device, method and mobile terminal Active CN107071293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710189250.8A CN107071293B (en) 2017-03-27 2017-03-27 Shooting device, method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710189250.8A CN107071293B (en) 2017-03-27 2017-03-27 Shooting device, method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107071293A CN107071293A (en) 2017-08-18
CN107071293B true CN107071293B (en) 2020-07-31

Family

ID=59618330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710189250.8A Active CN107071293B (en) 2017-03-27 2017-03-27 Shooting device, method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107071293B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111512624A (en) * 2017-09-27 2020-08-07 深圳传音通讯有限公司 Dual cameras and fusion imaging method thereof
CN108924435B (en) * 2018-07-12 2020-08-18 Oppo广东移动通信有限公司 Image processing method and device and electronic equipment

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101183206A (en) * 2006-11-13 2008-05-21 华晶科技股份有限公司 Method for calculating distance and actual size of shot object
CN101290388A (en) * 2008-06-02 2008-10-22 北京中星微电子有限公司 Automatic focusing method and image collecting device

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US7209175B1 (en) * 1996-04-08 2007-04-24 Nikon Corporation Autofocus apparatus
US8810691B2 (en) * 2010-09-03 2014-08-19 Olympus Imaging Corp. Imaging apparatus, imaging method and computer-readable recording medium
CN103780840B (en) * 2014-01-21 2016-06-08 上海果壳电子有限公司 Two camera shooting image forming apparatus of a kind of high-quality imaging and method thereof
CN105100646B (en) * 2015-08-31 2018-09-11 北京奇艺世纪科技有限公司 Method for processing video frequency and device
CN105513105B (en) * 2015-12-07 2018-05-29 天津大学 Image background weakening method based on notable figure
CN106231195A (en) * 2016-08-15 2016-12-14 乐视控股(北京)有限公司 A kind for the treatment of method and apparatus of taking pictures of intelligent terminal

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101183206A (en) * 2006-11-13 2008-05-21 华晶科技股份有限公司 Method for calculating distance and actual size of shot object
CN101290388A (en) * 2008-06-02 2008-10-22 北京中星微电子有限公司 Automatic focusing method and image collecting device

Non-Patent Citations (1)

Title
Development of a small-part dimension detection system based on image processing technology; Xu Erning; China Master's Theses Full-text Database, Information Science and Technology; 2014-06-15 (No. 6); I140-580 *

Also Published As

Publication number Publication date
CN107071293A (en) 2017-08-18

Similar Documents

Publication Publication Date Title
CN110072070B (en) Multi-channel video recording method, equipment and medium
CN112532859B (en) Video acquisition method and electronic equipment
JP6803982B2 (en) Optical imaging method and equipment
WO2017071559A1 (en) Image processing apparatus and method
CN111183632A (en) Image capturing method and electronic device
CN114092364B (en) Image processing method and related device
CN113727016A (en) Shooting method and electronic equipment
CN113810600B (en) Terminal image processing method and device and terminal equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN110430357B (en) Image shooting method and electronic equipment
CN113596316B (en) Photographing method and electronic equipment
CN105407295B (en) Mobile terminal filming apparatus and method
CN113660408B (en) Anti-shake method and device for video shooting
CN113452898A (en) Photographing method and device
CN113810590A (en) Image processing method, electronic device, medium, and system
CN107071277B (en) Optical drawing shooting device and method and mobile terminal
WO2020158070A1 (en) Imaging device, imaging method, and program
CN107071293B (en) Shooting device, method and mobile terminal
CN108156392B (en) Shooting method, terminal and computer readable storage medium
CN114143471B (en) Image processing method, system, mobile terminal and computer readable storage medium
WO2021017518A1 (en) Electronic device and image processing method
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN116782023A (en) Shooting method and electronic equipment
CN117880628A (en) Shooting method and related equipment thereof
CN106993138B (en) Time-gradient image shooting device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant