WO2022227893A1 - Image photographing method and device, terminal and storage medium - Google Patents

Info

Publication number
WO2022227893A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection
image
lens module
glass cover
terminal
Application number
PCT/CN2022/080664
Other languages
French (fr)
Chinese (zh)
Inventor
Shao Mingtian (邵明天)
Original Assignee
Oppo Guangdong Mobile Communications Co., Ltd. (Oppo广东移动通信有限公司)
Application filed by Oppo Guangdong Mobile Communications Co., Ltd. (Oppo广东移动通信有限公司)
Publication of WO2022227893A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image capturing method, device, terminal and storage medium.
  • Two or four sides of a curved screen are curved, and this kind of uneven screen glass cover can easily affect image capture by a front camera disposed under the curved screen.
  • the front camera is placed below a relatively flat area in the curved screen as much as possible, so as to avoid the influence of the curved screen on the image capture of the front camera.
  • the embodiments of the present application provide an image capturing method, device, terminal and storage medium, which can improve the imaging quality of image capturing through image correction.
  • the technical solution is as follows.
  • an image capturing method is provided, which is applied to a terminal, where the terminal includes a lens module disposed under a screen glass cover, and the method includes:
  • the surface shape information of the screen glass cover is acquired; image correction is performed on the original image captured by the lens module based on the surface shape information of the screen glass cover; and a corrected image obtained after the original image is corrected is output.
  • an image capturing device is provided, comprising: a surface shape information acquisition module, an image correction module, and an image output module;
  • the surface shape information acquisition module is used to obtain the surface shape information of the screen glass cover
  • the image correction module is used to perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover;
  • the image output module is configured to output the corrected image obtained after the original image is corrected.
  • a computer device is provided, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the image capturing method described above.
  • a computer-readable storage medium stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the image capturing method described in the above aspect.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the image capturing method provided in the foregoing optional implementation manner.
  • a chip is provided, the chip comprising a programmable logic circuit and/or program instructions; when the chip runs, it implements the image capturing method described in the above aspect.
  • FIG. 1 is a schematic diagram of an image capturing method provided by an exemplary embodiment of the present application
  • FIG. 2 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application
  • FIG. 3 is a schematic diagram of storing surface shape information of a screen glass cover provided by an exemplary embodiment of the present application
  • FIG. 4 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application.
  • FIG. 5 is a schematic diagram of fringe light detection provided by an exemplary embodiment of the present application.
  • FIG. 6 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application.
  • FIG. 7 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application.
  • FIG. 8 is a schematic diagram of a lens module provided by an exemplary embodiment of the present application.
  • FIG. 9 is a block diagram of an image capturing apparatus provided by an exemplary embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
  • the related technology solves the problem from the perspective of hardware design, for example, the front camera is designed below a relatively flat area in the curved screen.
  • the lens module includes: a stripe emitting end, a front camera and an image sensor.
  • the front camera is used for front image shooting; the stripe transmitter and the image sensor are used for stripe light detection.
  • the stripe emitting end is used to transmit the detection stripe emission signal, and the image sensor is used to receive the detection stripe reflection signal.
  • The image sensor may be a complementary metal-oxide-semiconductor (CMOS) receiver.
  • When the terminal uses the lens module to capture images, the image sensor obtains the detection fringe reflection signal and feeds it back to the central processing unit (CPU) in the terminal.
  • Under the condition that the detection fringe emission signal emitted by the stripe emitting end is fixed, the image sensor receives the detection fringe reflection signal formed by reflection from the object to be photographed and sends it to the CPU. Based on the detection fringe emission signal and the detection fringe reflection signal, the CPU can determine the fringe state change, which includes changes in fringe width, fringe spacing, and the like.
  • The CPU calculates the shooting distance from the object to the front camera from the time difference between the moment the stripe emitting end transmits the detection fringe emission signal and the moment the image sensor receives the detection fringe reflection signal, and feeds the shooting distance back to the front camera.
  • The front camera adjusts focus based on the shooting distance fed back by the CPU, takes a picture after focusing, and feeds the captured original image back to the CPU.
  • The CPU calculates the depth information of the captured original image from the fringe state change, and performs image segmentation on the original image based on the depth information.
  • The CPU first separates the person and the background in the original image based on the depth information, and then divides the person into different regions based on the depth information; the segmentation precision may be finer than 1 mm.
  • The CPU calls the surface shape information of the screen glass cover stored in the electrically erasable programmable read-only memory (EEPROM) of the front camera, and performs deconvolution processing on each segmented region based on that surface shape information, so that the influence of the curvature of the screen glass cover on the image is eliminated in each region.
  • The CPU then synthesizes the processed regions to obtain a complete corrected image, and outputs the corrected image.
  • FIG. 2 shows a flowchart of the image capturing method provided by an exemplary embodiment of the present application. The method can be applied to a terminal that includes a lens module arranged under a screen glass cover, and includes the following steps:
  • Step 201: Obtain the surface shape information of the screen glass cover.
  • the terminal obtains the pre-stored surface shape information of the screen glass cover in front of the lens module, or obtains the surface shape information of the screen glass cover by real-time detection.
  • When the lens module in the terminal is in a shooting state, the terminal performs real-time detection to obtain the surface shape information of the screen glass cover.
  • The lens module being in the shooting state means that it is in the process of framing and has not yet completed the shot.
  • The user enters the camera function page and taps the camera-switch icon to switch the terminal to the front camera; the terminal then frames the scene with the front camera and obtains the current surface shape information of the screen glass cover.
  • the terminal reads the pre-stored surface shape information of the screen glass cover from a memory (eg, EEPROM).
  • the surface shape information of the screen glass cover in the memory may be pre-stored before the terminal is assembled, or may be stored after the screen glass cover is detected after the terminal is assembled.
  • The lens module is disposed under the screen glass cover; it receives light from the area in front of the screen glass cover through the cover and performs optical imaging of that area. That is, the lens module in the embodiments of the present application is not one that adopts a pop-up or similar design and can receive light without passing through the screen glass cover.
  • the lens module can be understood as a lens module for front-facing camera.
  • the surface shape information of the screen glass cover is information for indicating the surface shape of the screen glass cover.
  • the surface shape information of the screen glass cover can also be understood as the curvature information of the screen glass cover, the influence information of the screen glass cover on imaging, and the like.
  • Step 202: Perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover.
  • the terminal uses the lens module to capture the original image, and invokes the obtained surface shape information of the screen glass cover to perform image correction on the original image.
  • When the terminal obtains the surface shape information of the screen glass cover through real-time detection, it stores the information, once obtained, in the memory corresponding to the lens module (for example, an EEPROM), and calls it from that memory after capturing the original image.
  • The lens module includes an image sensor and a front-facing camera. The image sensor feeds back the received detection reflection signal to the CPU, which calculates the surface shape information of the screen glass cover based on that signal and transmits it to the front camera's EEPROM for storage.
  • The terminal may use the surface shape information of the screen glass cover to correct original images captured over a period of time; that is, the surface shape information is updated at a fixed frequency. For example, the terminal stores the surface shape information and uses it to correct original images captured over the next month.
  • the surface shape information of the screen glass cover is automatically updated every time the terminal performs image capture.
  • The terminal uses the lens module to capture images: it obtains the surface shape information of the screen glass cover during framing, captures the image after framing is completed to obtain the original image, and corrects the original image using the surface shape information obtained this time.
  • In other words, the terminal detects the curvature of the screen glass cover each time it shoots, obtaining the latest surface shape information and applying it to image correction in the actual photograph.
  • Step 203: Output the corrected image obtained after the original image is corrected.
  • the terminal rectifies the original image into a rectified image, and outputs the rectified image.
  • The method provided in this embodiment performs image correction on the original image captured by the lens module under the screen glass cover by obtaining the surface shape information of the cover, so as to avoid the lens module being affected by the curved screen and thereby improve the imaging quality of image shooting.
  • In the hardware design of the terminal, it is not necessary to consider the influence of the curvature of the screen glass cover on the placement of the camera assembly, which would otherwise constrain the arrangement of the terminal's internal components, thereby reducing the implementation complexity of the terminal.
  • the terminal performs image correction on the original image based on the depth information of the original image, so as to obtain a better image correction effect.
  • the method can be applied to a terminal.
  • the terminal includes a lens module disposed under a screen glass cover, and the method includes:
  • Step 401: Obtain the surface shape information of the screen glass cover.
  • For the implementation of this step, refer to step 201 above; details are not repeated here.
  • Step 402: Acquire the depth information of the original image.
  • the depth information of the original image refers to the three-dimensional information of the photographed object in the original image in the three-dimensional world.
  • the lens module in the terminal uses machine vision technology to obtain depth information of the original image.
  • The machine vision technology may include structured-light detection technology and the time-of-flight (ToF) method, which are not limited in the embodiments of the present application.
  • The time-of-flight method refers to a detection method that obtains the depth information of the photographed object by measuring the time taken for light to reach the object and return.
  • the structured light detection technology refers to a detection method in which depth information is obtained by projecting a specific coding pattern onto the object to be photographed, converting the depth information into the change of the coding pattern, and detecting the change of the coding pattern.
  • the structured light detection technology includes: fringe light detection and speckle pattern detection.
  • the coding pattern projected in the fringe light detection is a fringe pattern
  • the coding pattern projected in the speckle pattern detection is a speckle pattern.
  • the terminal obtains the depth information of the original image through the following stripe light detection methods:
  • the lens module sends a first detection stripe emission signal.
  • the lens module includes a stripe transmitting end, and the terminal calls the stripe transmitting end to transmit the first detection stripe transmitting signal.
  • the lens module receives a first detection fringe reflection signal, where the first detection fringe reflection signal is a signal formed by the reflection of the first detection fringe emission signal by the object to be photographed.
  • The lens module includes an image sensor. The first detection fringe emission signal passes through the screen glass cover, is projected onto the surface of the object to be photographed, and is reflected by the object to form the first detection fringe reflection signal, which is received by the image sensor.
  • The fringe spacing and width differ between the first detection fringe reflection signal and the first detection fringe emission signal, and the terminal calculates the depth information of the original image based on these changes.
  • The fringe pattern 501 (i.e., the first detection fringe emission signal) is projected onto the object 502; the originally vertical fringes become distorted, and their spacing and width change. The fringe pattern is modulated by the height of the object 502, and the distorted fringes of the fringe pattern 503 (i.e., the first detection fringe reflection signal) contain the depth information of the object 502.
  • The terminal obtains the depth information of the original image by fitting and calculating the change between the first detection fringe reflection signal and the first detection fringe emission signal.
  • the fitting method used in the fitting calculation may be Gaussian polynomial fitting or other fitting scheme, which is not limited in this embodiment of the present application.
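The fitting step can be sketched in a few lines. This is a toy illustration, not the patent's algorithm: it assumes depth is proportional to the lateral fringe displacement (a triangulation-style assumption), and the function name, scale factor, and polynomial smoothing are all illustrative.

```python
import numpy as np

def depth_from_fringe_shift(ref_positions, observed_positions,
                            triangulation_scale=0.5, fit_degree=3):
    """Toy depth recovery: depth ~ fringe displacement, then a polynomial
    fit along the fringe axis smooths out detection noise."""
    shift = (np.asarray(observed_positions, dtype=float)
             - np.asarray(ref_positions, dtype=float))
    raw_depth = triangulation_scale * shift        # assumed proportionality
    x = np.arange(raw_depth.size)
    coeffs = np.polyfit(x, raw_depth, fit_degree)  # least-squares smoothing
    return np.polyval(coeffs, x)
```

A Gaussian basis could replace the polynomial here, in line with the "Gaussian polynomial fitting" option mentioned above.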
  • Step 403: Segment the original image into at least one image area based on the depth information.
  • the terminal divides pixels with similar depths into an image area based on the depth information of the image.
  • In one possible implementation, the terminal divides part of the original image into at least one image area based on the depth information; in another possible implementation, the terminal divides the entire original image into at least one image area based on the depth information.
  • the original image includes a person and a background.
  • the terminal first divides the person and the background based on the depth information, and then further divides the person and the background into smaller image areas based on the depth information.
  • the original image includes a person and a background
  • the terminal first divides the person and the background based on the depth information, and then divides the person part in the original image into at least one image area based on the depth information.
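The two-stage split described above can be sketched as follows. This is a minimal sketch under assumed conventions (a fixed depth threshold separating person from background, and uniform depth layers of roughly the 1 mm granularity mentioned earlier); none of these specifics come from the patent.

```python
import numpy as np

def segment_by_depth(depth_map, person_max_depth, layer_width=0.001):
    """Return (person_mask, layer_labels); background pixels get label -1."""
    # Stage 1: split person from background with a depth threshold.
    person_mask = depth_map < person_max_depth
    # Stage 2: quantize person depths into layers of `layer_width` metres.
    layer_labels = np.full(depth_map.shape, -1, dtype=int)
    if person_mask.any():
        person_depths = depth_map[person_mask]
        layer_labels[person_mask] = ((person_depths - person_depths.min())
                                     / layer_width).astype(int)
    return person_mask, layer_labels
```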
  • Step 404 Perform image correction on at least one image area based on the surface shape information of the screen glass cover.
  • the terminal uses the surface shape information of the screen glass cover to perform deconvolution processing on at least one image area to perform image correction.
  • the original image output by the lens module can be regarded as the result of convolution of the real image and the surface of the screen glass cover. Therefore, deconvolution processing is performed on the original image corresponding to at least one image area by using the surface shape information of the screen glass cover, and the real image corresponding to the image area can be obtained.
  • the deconvolution process is based on a cross-channel prior.
  • Cross-channel prior refers to sharing the information of different channels during the deconvolution process, so that the frequency information retained by one channel can help other channels to reconstruct and eliminate chromatic aberration.
  • the chromatic aberration correction is added to the cross-channel prior to reduce the blurring and color fringing caused by the chromatic aberration during the image correction process, so as to achieve high-quality imaging.
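A minimal per-channel Wiener-style deconvolution conveys the idea of removing the cover's blur. This is an illustrative stand-in, not the patent's method: the text describes a cross-channel prior that couples the channels, whereas this sketch only shares a single regularization term across them, and the point-spread function standing in for the cover's surface shape information is assumed.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=0.01):
    """image: HxWxC array; psf: 2-D blur kernel modelling the cover's effect;
    k: regularization constant shared by all channels."""
    H = np.fft.fft2(psf, s=image.shape[:2])
    denom = np.abs(H) ** 2 + k                   # shared regularization term
    out = np.empty_like(image, dtype=float)
    for c in range(image.shape[2]):              # deconvolve each channel
        G = np.fft.fft2(image[..., c])
        out[..., c] = np.real(np.fft.ifft2(G * np.conj(H) / denom))
    return out
```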
  • Step 405: Synthesize the original image and the at least one corrected image area to obtain the corrected image.
  • After performing image correction on the at least one image area, the terminal synthesizes the uncorrected part of the original image with the corrected image areas, thereby obtaining the corrected image.
  • the terminal divides the character part in the original image into an image area, performs image correction on the image area corresponding to the character part, and then combines the corrected character part with other parts in the original image to obtain a corrected image.
  • the terminal synthesizes the at least one image area after image correction to obtain a corrected image.
  • Step 406: Output the corrected image.
  • The terminal obtains the depth information of the original image, divides the original image into multiple image areas based on that depth information, and then corrects each image area separately. Because the computation operates at a finer granularity, this correction method improves the accuracy of image correction.
  • The surface shape information of the screen glass cover is acquired by the terminal through machine-vision-based detection.
  • the machine vision technology may include: structured light detection technology and light time-of-flight method, which are not limited in this embodiment of the present application.
  • the structured light detection technology includes: fringe light detection and speckle pattern detection.
  • FIG. 6 shows a flowchart of an image capturing method provided by an exemplary embodiment of the present application.
  • the method can be applied to a terminal.
  • the terminal includes a lens module disposed under a screen glass cover, and the method includes:
  • Step 601: Call the lens module to perform fringe light detection and obtain the surface shape information of the screen glass cover.
  • Fringe light detection refers to a detection method in which a reflection test is carried out using a detection fringe signal with determined fringe information; "determined fringe information" means that, in the fringe pattern corresponding to the detection fringe signal, the spacing and width of the fringes are fixed.
  • the terminal performs fringe light detection in the following manner:
  • the lens module sends a second detection stripe emission signal.
  • the lens module includes a stripe transmitting end, and the terminal invokes the stripe transmitting end to transmit the second detection stripe transmitting signal.
  • The second detection fringe emission signal and the first detection fringe emission signal described in the above embodiments may be two different parts of the detection fringe emission signal emitted by the stripe emitting end at the same time point: the first part is the first detection fringe emission signal, which passes through the screen glass cover and is projected onto the surface of the object to be photographed; the second part is the second detection fringe emission signal, which does not pass through the screen glass cover.
  • the lens module receives a second detection stripe reflection signal, where the second detection stripe reflection signal is a signal formed by the second detection stripe emission signal reflected by the screen glass cover.
  • the lens module includes an image sensor, the second detection stripe emission signal is reflected by the screen glass cover to form a second detection stripe reflection signal, and the second detection stripe reflection signal is received by the image sensor.
  • the spacing and width of the stripes between the reflection signal of the second detection stripe and the emission signal of the second detection stripe change, and the terminal calculates the surface shape information of the screen glass cover based on the above changes.
  • The terminal obtains the surface shape information of the screen glass cover by fitting and calculating the change between the second detection fringe reflection signal and the second detection fringe emission signal.
  • the fitting method used in the fitting calculation may be Gaussian polynomial fitting or other fitting scheme, which is not limited in this embodiment of the present application.
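The fitting calculation can be illustrated with an ordinary least-squares polynomial surface fit over sampled fringe deviations. The basis choice, degree, and function names are assumptions for illustration, not the patent's method.

```python
import numpy as np

def _poly_terms(xs, ys, degree):
    """Monomial basis x^i * y^j with i + j <= degree."""
    return np.stack([xs**i * ys**j
                     for i in range(degree + 1)
                     for j in range(degree + 1 - i)], axis=1)

def fit_surface_shape(xs, ys, deviations, degree=2):
    """Least-squares fit of deviations ~ poly(x, y); returns coefficients."""
    A = _poly_terms(xs, ys, degree)
    coeffs, *_ = np.linalg.lstsq(A, deviations, rcond=None)
    return coeffs

def eval_surface(coeffs, xs, ys, degree=2):
    """Evaluate the fitted surface at sample points (xs, ys)."""
    return _poly_terms(xs, ys, degree) @ coeffs
```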
  • Step 602: Perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover.
  • For the implementation of this step, refer to step 202 above; details are not repeated here.
  • Step 603: Output the corrected image obtained after the original image is corrected.
  • For the implementation of this step, refer to step 203 above; details are not repeated here.
  • The terminal obtains accurate surface shape information of the screen glass cover through fringe light detection, i.e., from the change between the detection fringe reflection signal and the detection fringe emission signal. Because this surface shape information is used for image correction, the correction effect is improved.
  • the terminal adjusts the focus of the lens module before capturing the image, thereby improving the imaging quality of the image.
  • the method can be applied to a terminal.
  • the terminal includes a lens module disposed under a screen glass cover, and the method includes:
  • Step 701: Obtain the surface shape information of the screen glass cover.
  • For the implementation of this step, refer to step 201 above; details are not repeated here.
  • Step 702: Obtain a reference distance, where the reference distance is the distance between the object to be photographed and the lens module.
  • When the lens module is framing, the terminal obtains the distance between the photographed object and the lens module, and focuses based on that distance.
  • the reference distance is detected by the terminal based on machine vision technology.
  • the machine vision technology may include: structured light detection technology and light time-of-flight method, which are not limited in this embodiment of the present application.
  • the structured light detection technology includes: fringe light detection and speckle pattern detection.
  • the terminal obtains the reference distance through stripe light detection:
  • the lens module includes a stripe transmitting end, and the terminal calls the stripe transmitting end to transmit a third detection stripe transmitting signal.
  • the third detection fringe emission signal and the first detection fringe emission signal described in the above embodiments may be the same signal.
  • the lens module is called to receive the reflection signal of the third detection stripe, and the reflection signal of the third detection stripe is a signal formed by the emission signal of the third detection stripe reflected by the object to be photographed.
  • The lens module includes an image sensor. The third detection fringe emission signal passes through the screen glass cover, is projected onto the surface of the object to be photographed, and is reflected by the object to form the third detection fringe reflection signal, which is received by the image sensor.
  • The third detection fringe reflection signal and the first detection fringe reflection signal described in the above embodiments may be the same signal.
  • the first round-trip time is the difference between the time point when the lens module transmits the emission signal of the third detection stripe and the time point when the reflection signal of the third detection stripe is received.
  • the reflection signal of the third detection fringe is formed by the reflection of the emission signal of the third detection fringe by the object to be photographed, and the reflection signal of the third detection fringe is also received by the lens module.
  • the time difference between transmission and reception (ie, the first round-trip time) and the propagation speed of the signal can be used to calculate the distance between the object to be photographed and the lens module.
  • Alternatively, the first round-trip time can be measured as the difference between the time point when the lens module receives the third detection fringe reflection signal and the time point when it receives the second detection fringe reflection signal, where the second detection fringe reflection signal is the signal formed by the second detection fringe emission signal reflected by the screen glass cover. This is because the screen glass cover is very close to the lens module, and that small distance has little effect on the value of the reference distance. Therefore, when the terminal transmits the third detection fringe emission signal and the second detection fringe emission signal at the same time, it may take the difference between the two reception time points as the first round-trip time.
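The distance computation reduces to distance = speed of light x round-trip time / 2. The helper below also sketches the cover-glass-referenced variant described above; both function names are placeholders, not terms from the patent.

```python
C = 299_792_458.0  # speed of light in m/s

def shooting_distance(t_emit, t_receive):
    """Reference distance from a direct round-trip time measurement."""
    return C * (t_receive - t_emit) / 2.0

def shooting_distance_via_cover(t_cover_reflection, t_object_reflection):
    """Approximate variant: time the interval between the arrival of the
    cover-glass reflection and the arrival of the object reflection,
    treating the cover as effectively co-located with the lens module."""
    return C * (t_object_reflection - t_cover_reflection) / 2.0
```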
  • Step 703: Focus the lens module based on the reference distance.
  • the lens module adopts a zoom lens
  • The terminal uses a focusing motor to drive the lens in the lens module to an ideal position to complete focusing, where the ideal position is the lens position at which the object to be photographed at the current reference distance achieves the ideal focusing effect.
  • The terminal compares the reference distance with an ideal reference distance range; if the reference distance is not within that range, prompt information is displayed on the terminal.
  • the prompt information is used to prompt the user to adjust the distance between the object to be photographed and the terminal.
  • The ideal reference distance range is the range of distances between the object to be photographed and the lens module within which the lens module achieves an ideal focusing effect. Exemplarily, if the reference distance is greater than the ideal reference distance range, the terminal prompts the user to shorten the distance between the photographed object and the terminal; if the reference distance is less than the ideal reference distance range, the terminal prompts the user to increase that distance.
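The prompt logic amounts to a simple range check. In this sketch, the range endpoints and the prompt strings are placeholder values of my choosing, not values from the patent.

```python
# Assumed ideal focusing range in metres (placeholder, not from the patent).
IDEAL_RANGE = (0.25, 0.60)

def focus_prompt(reference_distance, ideal_range=IDEAL_RANGE):
    """Return a prompt string, or None when no prompt is needed."""
    low, high = ideal_range
    if reference_distance > high:
        return "move closer"      # shorten subject-to-terminal distance
    if reference_distance < low:
        return "move farther"     # increase subject-to-terminal distance
    return None                   # within range: no prompt shown
```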
  • the reference distance can be one value or multiple values.
  • When the reference distance is one value, the reference distance is the distance between one point of the object to be photographed and the lens module; when the reference distance is multiple values, the reference distances are the distances between multiple points of the object to be photographed and the lens module.
  • the terminal in response to obtaining the at least two reference distances, processes the at least two reference distances based on the attention mechanism to obtain the target reference distance; and adjusts the focus of the lens module based on the target reference distance.
  • Processing at least two reference distances based on the attention mechanism refers to a processing method of assigning different weights to at least two reference distances, and then performing weighting.
  • the terminal assigns weights to different reference distances based on the positions of the points of the photographed objects corresponding to the reference distances in the image. For example, if the point corresponding to the first reference distance is in the middle of the image, and the point corresponding to the second reference distance is at the edge of the image, the weight of the first reference distance is higher than that of the second reference distance.
  • the terminal assigns weights to different reference distances based on properties of points of the object to be photographed corresponding to the reference distances. For example, if the point corresponding to the first reference distance belongs to a person, and the point corresponding to the second reference distance belongs to the background, the weight of the first reference distance is higher than that of the second reference distance.
  • the terminal assigns weights to different reference distances based on whether the object to be photographed corresponding to the reference distance is selected by the user. For example, if the object to be photographed corresponding to the first reference distance is selected by the user, and the object to be photographed corresponding to the second reference distance is not selected by the user, the weight of the first reference distance is higher than that of the second reference distance.
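The three weighting strategies above all reduce to a weighted combination of the reference distances. A minimal sketch with illustrative weights follows; the patent does not specify how the weights are computed numerically:

```python
def target_reference_distance(distances, weights):
    """Combine several reference distances into one target reference
    distance; larger weights go to points that are central in the image,
    belong to a person, or were selected by the user."""
    total = sum(weights)
    return sum(d * w / total for d, w in zip(distances, weights))

# a central/person point (0.5 m) outweighs an edge/background point (2.0 m)
target = target_reference_distance([0.5, 2.0], [0.8, 0.2])  # ≈ 0.8 m
```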
  • Step 704: Capture an image using the focus-adjusted lens module to obtain an original image.
  • the lens module is used to capture the image, and the original image captured at this time corresponds to a better focusing effect.
  • Step 705: Perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover.
  • For the implementation of this step, refer to the above-mentioned step 202, which will not be repeated here.
  • Step 706: Output the corrected image obtained after the original image is corrected.
  • For the implementation of this step, refer to the above-mentioned step 203, which will not be repeated here.
  • In this way, the terminal obtains the reference distance between the object to be photographed and the lens module before capturing the image, and uses the reference distance to focus the lens module, thereby providing an implementation of an auxiliary focusing method and improving the imaging quality of the image.
  • FIG. 8 shows a schematic diagram of a lens module provided by an exemplary embodiment of the present application.
  • the lens module includes a camera 801 , a detection signal transmitting end 802 and an image sensor 803 .
  • the detection signal transmitting end 802 and the image sensor 803 are symmetrically arranged with the optical axis of the camera 801 as the axis of symmetry.
  • the optical axis of the detection signal emitting end 802 and the central axis of the photosensitive surface of the image sensor 803 are symmetrically arranged with the optical axis of the camera 801 as the axis of symmetry.
  • the detection signal transmitter 802 and the image sensor 803 can be arranged independently, with no fixed connection between the two devices; alternatively, the detection signal transmitter 802 and the image sensor 803 can be arranged as a unit, with the two devices fixedly connected.
  • the two devices form a U-shaped structure and surround both sides of the camera 801 .
  • the camera 801 is used for image capturing.
  • the camera 801 is a front camera.
  • the detection signal transmitting end 802 is used for transmitting detection emission signals.
  • the detection signal transmitter 802 is used for transmitting detection stripe transmission signals, and the detection signal transmitter 802 is a kind of stripe transmitter.
  • the detection signal transmitter 802 is configured to transmit a detection speckle emission signal, and the detection signal transmitter 802 is a speckle pattern transmitter.
  • the detection signal transmitting end 802 is a liquid crystal display (Liquid Crystal Display, LCD) screen.
  • the wavelength of the detection signal emitted by the detection signal transmitting end 802 is in the non-visible light range, such as infrared light.
  • the image sensor 803 is used to receive the detection reflection signal, and the detection reflection signal is a signal formed by the reflection of the detection emission signal.
  • the detection reflection signal received by the image sensor 803 is a detection fringe reflection signal.
  • the detection reflection signal received by the image sensor 803 is a detection speckle reflection signal.
  • the resolution of the detection target surface of the image sensor 803 is not less than 2000×2000 pixels, and the size of a single pixel is 2 μm–4 μm.
  • the image sensor 803 is an area array image sensor, and the area array image sensor supports detecting the distances between multiple points of the photographed object and the lens module, that is, the area array image sensor supports obtaining multiple reference distances.
  • the detection signal transmitter 802 transmits a detection fringe emission signal carrying determined fringe information.
  • the detection stripe reflection signal corresponding to the detection stripe emission signal is absorbed by the image sensor 803 .
  • a part of the detection fringe emission signal (that is, the second detection fringe emission signal) is reflected by the screen glass cover 804 of the terminal to form a second detection fringe reflection signal.
  • the second detection fringe reflection signal is absorbed by the image sensor 803 and sent to the CPU processor in the terminal. The CPU processor calculates the surface shape information of the screen glass cover 804 according to the fringe changes, mainly including the curvatures in the X and Y directions, thereby constructing a three-dimensional model of the screen glass cover 804.
  • The other part of the detection fringe emission signal (that is, the first detection fringe emission signal) passes through the screen glass cover 804, travels to the surface of the photographed object 805, and is reflected to form the first detection fringe reflection signal. The first detection fringe reflection signal is absorbed by the image sensor 803 and sent to the CPU processor in the terminal, and the CPU processor likewise calculates the depth information of the photographed object 805 according to the fringe changes.
  • the CPU processor can also calculate the reference distance between the photographed object 805 and the lens module according to the time difference between the first detection fringe reflection signal and the second detection fringe reflection signal received by the image sensor 803 .
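Assuming the fringe analysis has already yielded a sampled height map of the glass cover, the X- and Y-direction curvatures mentioned above can be approximated as second spatial derivatives. This NumPy sketch (the height map and grid spacing are illustrative) is one possible realization, not the patent's actual algorithm:

```python
import numpy as np

def cover_curvatures(height_map: np.ndarray, spacing: float):
    """Approximate the curvature of the cover surface along X (columns)
    and Y (rows) as second derivatives of the sampled height map."""
    dzdy, dzdx = np.gradient(height_map, spacing)
    curv_x = np.gradient(dzdx, spacing, axis=1)  # curvature in the X direction
    curv_y = np.gradient(dzdy, spacing, axis=0)  # curvature in the Y direction
    return curv_x, curv_y
```

For a parabolic cross-section z = x², the interior curvature along X evaluates to the expected constant 2.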
  • the screen glass cover 804 is a curved screen glass cover, and the above-mentioned lens module is disposed under the curved screen glass cover.
  • the terminal may be a curved screen terminal.
  • the image capturing method shown in this application can also be applied to the terminal of this type.
  • To sum up, the lens module can perform fringe light detection through the symmetrically arranged fringe emitter and image sensor, and the fringe light detection scheme can be applied to the front lens module of a mobile phone in two ways: first, to calculate the surface shape information of the screen glass cover above the lens module, compensate and correct the aberration through an algorithm, and correct the original image obtained by shooting; second, to detect and calculate the distance and depth information of the photographed object, so as to achieve more accurate facial recognition and focusing.
  • FIG. 9 shows a schematic structural diagram of an image capturing apparatus provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or a part of the terminal through software, hardware or a combination of the two, and the device includes: a surface shape information acquisition module 901, an image correction module 902 and an image output module 903;
  • the surface shape information acquisition module 901 is used to acquire the surface shape information of the screen glass cover;
  • the image correction module 902 is configured to perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover;
  • the image output module 903 is configured to output the corrected image obtained after the original image is corrected.
  • the image correction module 902 is configured to acquire depth information of the original image; segment the original image into at least one image area based on the depth information; and perform image correction on the at least one image area based on the surface shape information of the screen glass cover;
  • the image output module 903 is used for synthesizing the original image and the corrected at least one image area to obtain the corrected image, and outputting the corrected image.
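A schematic sketch of the segment-correct-composite flow described for modules 902 and 903; `correct_region` stands in for the cover-shape-based correction, whose details the patent leaves unspecified, so it is a hypothetical callable here:

```python
import numpy as np

def correct_by_depth(original, depth, depth_bins, correct_region):
    """Segment the original image into regions by depth, correct each
    region, then composite the corrected regions back over the original."""
    corrected = original.copy()
    for lo, hi in depth_bins:
        mask = (depth >= lo) & (depth < hi)
        if mask.any():
            corrected[mask] = correct_region(original, mask)[mask]
    return corrected
```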
  • the image correction module 902 is used for the lens module to obtain the depth information of the original image based on the structured light detection technology; or, the image correction module 902 is used for all The lens module obtains the depth information of the original image based on the time-of-flight method of light.
  • the image correction module 902 is configured to: send, by the lens module, a first detection fringe emission signal; receive, by the lens module, a first detection fringe reflection signal, where the first detection fringe reflection signal is a signal formed by the reflection of the first detection fringe emission signal by the object to be photographed; and acquire the depth information of the original image based on the change between the first detection fringe reflection signal and the first detection fringe emission signal.
  • the image correction module 902 is configured to perform deconvolution processing on the at least one image area by using the surface shape information of the screen glass cover, so as to perform image correction.
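One common way to realize such a deconvolution correction is Wiener deconvolution in the frequency domain. The sketch below assumes a point spread function (PSF) derived from the cover's surface shape; how that PSF would be obtained is not specified by the patent:

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_power=0.01):
    """Wiener deconvolution: divide in the frequency domain, regularized
    by an estimated noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F_hat))
```

With an identity PSF and zero noise power this reduces to the original image, which makes it easy to sanity-check.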
  • the surface shape information acquisition module 901 is used for the lens module to acquire the surface shape information of the screen glass cover based on structured light detection technology; or, the surface shape information acquisition module 901 is used for the lens module to acquire the surface shape information of the screen glass cover based on the time-of-flight method of light.
  • the surface shape information acquisition module 901 is configured to: send, by the lens module, a second detection fringe emission signal; receive, by the lens module, a second detection fringe reflection signal, where the second detection fringe reflection signal is a signal formed by the reflection of the second detection fringe emission signal by the screen glass cover; and obtain the surface shape information of the screen glass cover based on the change between the second detection fringe reflection signal and the second detection fringe emission signal.
  • the device further includes a focusing module; the focusing module is configured to obtain a reference distance, where the reference distance is the distance from the object to be photographed to the lens module; focus the lens module based on the reference distance; and use the focus-adjusted lens module to capture an image to obtain the original image.
  • the focusing module is used for the lens module to obtain the reference distance based on structured light detection technology; or, the focusing module is used for the lens module to obtain the reference distance based on The time-of-flight method is used to obtain the reference distance.
  • the focusing module is configured to: send, by the lens module, a third detection fringe emission signal; receive, by the lens module, a third detection fringe reflection signal, where the third detection fringe reflection signal is a signal formed by the reflection of the third detection fringe emission signal by the object to be photographed; and determine the reference distance based on the first round-trip time, where the first round-trip time is the difference between the time point at which the lens module transmits the third detection fringe emission signal and the time point at which it receives the third detection fringe reflection signal.
  • the focusing module is configured to, in response to obtaining at least two reference distances, process the at least two reference distances based on an attention mechanism to obtain a target reference distance, and focus the lens module according to the target reference distance.
  • FIG. 10 shows a structural block diagram of a terminal 1000 provided by an exemplary embodiment of the present application.
  • the terminal 1000 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer or a desktop computer.
  • Terminal 1000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
  • the terminal 1000 includes: a processor 1001 , a memory 1002 , a peripheral device interface 1003 and a lens module 1006 .
  • the processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array).
  • the processor 1001 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the wake-up state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1001 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1001 may further include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction, and the at least one instruction is executed by the processor 1001 to implement the image capturing method provided by the method embodiments of this application.
  • the peripheral device interface 1003 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1001 and the memory 1002 .
  • In some embodiments, the processor 1001, the memory 1002 and the peripheral device interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002 and the peripheral device interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the processor 1001, the memory 1002 and the peripheral device interface 1003 may be connected through a bus or a signal line.
  • the lens module 1006 can be connected to the peripheral device interface 1003 through a bus, a signal line or a circuit board.
  • the lens module 1006 is used to capture images or videos.
  • the lens module 1006 includes: a camera, and a detection signal transmitter and an image sensor that are symmetrically arranged with the optical axis of the camera as the axis of symmetry; the camera is used for image capturing; the detection signal transmitter is used for transmitting a detection emission signal; the image sensor is used to receive a detection reflection signal, where the detection reflection signal is a signal formed by the reflection of the detection emission signal.
  • the cameras include a front-facing camera and a rear-facing camera. Usually, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal.
  • the lens module 1006 may also include a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to the combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • the terminal 1000 further includes other peripheral devices other than the lens module 1006 .
  • Each peripheral device can be connected to the peripheral device interface 1003 through a bus, a signal line or a circuit board.
  • other peripheral devices include: at least one of a radio frequency circuit 1004 , a display screen 1005 , an audio circuit 1007 , a positioning component 1008 and a power supply 1009 .
  • the radio frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1004 communicates with the communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1004 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 1004 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G and 5G), wireless local area network and/or WiFi (Wireless Fidelity, Wireless Fidelity) network.
  • the radio frequency circuit 1004 may further include a circuit related to NFC (Near Field Communication, short-range wireless communication), which is not limited in this application.
  • the display screen 1005 is used for displaying UI (User Interface, user interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the display screen 1005 also has the ability to acquire touch signals on or above the surface of the display screen 1005 .
  • the touch signal can be input to the processor 1001 as a control signal for processing.
  • the display screen 1005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 1005, which is arranged on the front panel of the terminal 1000; in other embodiments, there may be at least two display screens 1005, which are respectively arranged on different surfaces of the terminal 1000 or in a folded design; in still other embodiments, the display screen 1005 may be a flexible display screen disposed on a curved surface or a folding surface of the terminal 1000. The display screen 1005 may even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 1005 can be prepared by using materials such as LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light emitting diode).
  • Audio circuitry 1007 may include a microphone and speakers.
  • the microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing, or to the radio frequency circuit 1004 to realize voice communication.
  • the microphone may also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1001 or the radio frequency circuit 1004 into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • the speaker When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for distance measurement and other purposes.
  • the audio circuit 1007 may also include a headphone jack.
  • the positioning component 1008 is used to locate the current geographic location of the terminal 1000 to implement navigation or LBS (Location Based Service).
  • the positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia or the Galileo system of the European Union.
  • the power supply 1009 is used to power various components in the terminal 1000 .
  • the power source 1009 may be alternating current, direct current, disposable batteries or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. Wired rechargeable batteries are batteries that are charged through wired lines, and wireless rechargeable batteries are batteries that are charged through wireless coils.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 1000 further includes one or more sensors 1010 .
  • the one or more sensors 1010 include, but are not limited to, an acceleration sensor 1011 , a gyro sensor 1012 , a pressure sensor 1013 , a fingerprint sensor 1014 , an optical sensor 1015 and a proximity sensor 1016 .
  • the acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 1000 .
  • the acceleration sensor 1011 can be used to detect the components of the gravitational acceleration on the three coordinate axes.
  • the processor 1001 can control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011 .
  • the acceleration sensor 1011 can also be used for game or user movement data collection.
  • the gyroscope sensor 1012 can detect the body direction and rotation angle of the terminal 1000 , and the gyroscope sensor 1012 can cooperate with the acceleration sensor 1011 to collect 3D actions of the user on the terminal 1000 .
  • the processor 1001 can implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1013 may be disposed on the side frame of the terminal 1000 and/or the lower layer of the display screen 1005 .
  • the processor 1001 performs left and right hand identification or shortcut operations according to the holding signal collected by the pressure sensor 1013.
  • the processor 1001 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1005.
  • the operability controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.
  • the fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 or the fingerprint sensor 1014 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 1014 may be disposed on the front, back or side of the terminal 1000 . When the terminal 1000 is provided with physical buttons or a manufacturer's logo, the fingerprint sensor 1014 may be integrated with the physical buttons or the manufacturer's logo.
  • the optical sensor 1015 is used to collect ambient light intensity.
  • the processor 1001 can control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015 . Specifically, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is decreased.
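A toy sketch of that brightness policy; the lux thresholds and the linear mapping are illustrative assumptions, not values from the patent:

```python
def display_brightness(ambient_lux, lo=10.0, hi=500.0):
    """Map ambient light intensity to a display brightness in [0.1, 1.0]:
    brighter surroundings raise the brightness, dimmer ones lower it."""
    if ambient_lux <= lo:
        return 0.1
    if ambient_lux >= hi:
        return 1.0
    return 0.1 + 0.9 * (ambient_lux - lo) / (hi - lo)
```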
  • the processor 1001 may also dynamically adjust the shooting parameters of the lens module 1006 according to the ambient light intensity collected by the optical sensor 1015.
  • a proximity sensor 1016 also called a distance sensor, is usually disposed on the front panel of the terminal 1000 .
  • the proximity sensor 1016 is used to collect the distance between the user and the front of the terminal 1000 .
  • When the proximity sensor 1016 detects that the distance between the user and the front of the terminal 1000 gradually decreases, the processor 1001 controls the display screen 1005 to switch from the screen-on state to the screen-off state; when the proximity sensor 1016 detects that the distance between the user and the front of the terminal 1000 gradually increases, the processor 1001 controls the display screen 1005 to switch from the screen-off state to the screen-on state.
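The proximity-sensor behavior above amounts to a small state machine; a hedged sketch follows (the function and state names are illustrative):

```python
def next_screen_state(prev_distance, distance, screen_on):
    """Screen turns off while the user approaches the front panel and back
    on while the user moves away; an unchanged distance keeps the state."""
    if distance < prev_distance:
        return False  # switch to the screen-off state
    if distance > prev_distance:
        return True   # switch to the screen-on state
    return screen_on
```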
  • Those skilled in the art can understand that the structure shown in FIG. 10 does not constitute a limitation on the terminal 1000, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • the present application also provides a computer-readable storage medium storing at least one instruction, at least one piece of program, a code set or an instruction set, which is loaded and executed by a processor to implement the image capturing method provided by the above method embodiments.
  • the present application also provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the image capturing method provided in the foregoing optional implementation manner.
  • References herein to "a plurality" mean two or more.
  • "And/or" which describes the association relationship of the associated objects, means that there can be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, A and B exist at the same time, and B exists alone.
  • the character “/” generally indicates that the associated objects are an "or" relationship.


Abstract

The present application relates to the technical field of image processing, and provides an image photographing method and device, a terminal and a storage medium. The method is applied to a terminal, and the terminal comprises a lens module provided under a screen glass cover plate. The method comprises: acquiring surface shape information of the screen glass cover plate; performing image correction, on the basis of the surface shape information of the screen glass cover plate, on the original image captured by the lens module; and outputting the corrected image obtained after the original image is corrected. According to the method provided by embodiments of the present application, on the one hand, the imaging quality of image photographing can be improved by means of image correction; on the other hand, the hardware design of the terminal does not need to account for the influence of the bending degree of the screen glass cover plate on the placement of the camera assembly, which would otherwise disturb the arrangement of the terminal's original components, such that the implementation complexity of the terminal is reduced.

Description

图像拍摄方法、装置、终端及存储介质Image capturing method, device, terminal and storage medium
本申请要求于2021年04月30日提交的申请号为202110478495.9、发明名称为“图像拍摄方法、装置、终端及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application with the application number 202110478495.9 and the invention title "image capturing method, device, terminal and storage medium" filed on April 30, 2021, the entire contents of which are incorporated into this application by reference .
技术领域technical field
本申请涉及图像处理技术领域,特别涉及一种图像拍摄方法、装置、终端及存储介质。The present application relates to the technical field of image processing, and in particular, to an image capturing method, device, terminal and storage medium.
背景技术Background technique
随着终端技术的发展,许多手机选择配置曲面屏这一类型的屏幕玻璃盖板。With the development of terminal technology, many mobile phones choose to configure a curved screen, a type of screen glass cover.
曲面屏的两个侧边或四个侧边带有曲面,这一种不平整的屏幕玻璃盖板容易对设置于曲面屏下的前置摄像头的图像拍摄造成影响。Two sides or four sides of the curved screen are curved, and this kind of uneven screen glass cover may easily affect the image capture of the front camera disposed under the curved screen.
相关技术中,通过将前置摄像头尽量放置于曲面屏中相对平整的区域的下方,以避免曲面屏对前置摄像头的图像拍摄的影响。In the related art, the front camera is placed below a relatively flat area in the curved screen as much as possible, so as to avoid the influence of the curved screen on the image capture of the front camera.
发明内容SUMMARY OF THE INVENTION
Embodiments of the present application provide an image capturing method and device, a terminal, and a storage medium, which can improve the imaging quality of image capture through image correction. The technical solutions are as follows.
According to one aspect of the present application, an image capturing method is provided and applied to a terminal, where the terminal includes a lens module disposed under a screen glass cover. The method includes:
obtaining surface shape information of the screen glass cover;
performing image correction on an original image captured by the lens module based on the surface shape information of the screen glass cover; and
outputting a corrected image obtained after the original image is corrected.
According to one aspect of the present application, an image capturing device is provided. The device includes a surface shape information obtaining module, an image correction module, and an image output module.
The surface shape information obtaining module is configured to obtain surface shape information of a screen glass cover.
The image correction module is configured to perform image correction on an original image captured by a lens module based on the surface shape information of the screen glass cover.
The image output module is configured to output a corrected image obtained after the original image is corrected.
According to another aspect of the present application, a computer device is provided. The computer device includes a processor and a memory, and the memory stores at least one instruction, at least one program, a code set, or an instruction set. The at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the image capturing method described in the above aspect.
According to another aspect of the present application, a computer-readable storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set. The at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the image capturing method described in the above aspect.
According to another aspect of the present application, a computer program product or a computer program is provided. The computer program product or the computer program includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the image capturing method provided in the foregoing optional implementations.
According to another aspect of the embodiments of the present application, a chip is provided. The chip includes a programmable logic circuit and/or program instructions, and when the chip runs, it implements the image capturing method described in the above aspect.
The beneficial effects of the technical solutions provided in the embodiments of the present application include at least the following:
Based on the obtained surface shape information of the screen glass cover, image correction is performed on the original image captured by the lens module under the screen glass cover, avoiding the influence of the curved screen on image capture by the lens module. On the one hand, this improves the imaging quality of image capture; on the other hand, the hardware design of the terminal no longer needs to account for the influence of the curvature of the screen glass cover on the placement of the camera assembly, which would otherwise disturb the arrangement of existing components in the terminal, thereby reducing the implementation complexity of the terminal.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an image capturing method provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of storing surface shape information of a screen glass cover provided by an exemplary embodiment of the present application;
FIG. 4 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of fringe light detection provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application;
FIG. 7 is a flowchart of an image capturing method provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a lens module provided by an exemplary embodiment of the present application;
FIG. 9 is a block diagram of an image capturing device provided by an exemplary embodiment of the present application;
FIG. 10 is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, implementations of the present application are described in further detail below with reference to the accompanying drawings.
In recent years, curved screens have become standard in the mobile phone industry. In addition to screens curved on two sides, screens curved on four sides are now a trend. The uneven screen glass cover greatly affects focusing and shooting by the front camera below it, causing problems such as out-of-focus imaging and unclear pictures.
To avoid the influence of the curved screen on image capture by the front camera, the related art addresses the problem from the perspective of hardware design, for example, by placing the front camera below a relatively flat area of the curved screen.
In the embodiments of the present application, based on the obtained surface shape information of the screen glass cover, image correction is performed on the original image captured by the lens module under the screen glass cover, so as to avoid the influence of the curved screen on image capture by the lens module. The image capturing method provided by the embodiments of the present application is exemplarily described below with reference to FIG. 1.
In this embodiment, the lens module includes a fringe emitter, a front camera, and an image sensor. The front camera is used for front-facing image capture; the fringe emitter and the image sensor are used for fringe light detection. Specifically, the fringe emitter emits a detection fringe emission signal, and the image sensor receives a detection fringe reflection signal. The image sensor may be a complementary metal oxide semiconductor (CMOS) receiver.
As shown in FIG. 1, when the terminal uses the lens module to capture an image, the image sensor obtains the detection fringe reflection signal and feeds it back to the central processing unit (CPU) of the terminal. Exemplarily, with the detection fringe emission signal emitted by the fringe emitter being fixed, the image sensor receives the detection fringe reflection signal formed by reflection off the photographed object and sends it to the CPU. Based on the detection fringe emission signal and the detection fringe reflection signal, the CPU determines fringe state changes, including changes in fringe width, changes in fringe spacing, and the like.
The CPU obtains the time difference between the fringe emitter emitting the detection fringe emission signal and the image sensor receiving the detection fringe reflection signal, calculates the shooting distance from the photographed object to the front camera, and feeds the shooting distance back to the front camera. The front camera adjusts its focus based on the shooting distance fed back by the CPU, captures a picture after focusing, and feeds the captured original image back to the CPU.
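The distance calculation described above can be sketched as follows: the delay between emitting the detection signal and receiving its reflection is converted into the shooting distance from the photographed object to the front camera. The function name and interface below are illustrative, not taken from the application.

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def shooting_distance_m(emit_time_s: float, receive_time_s: float) -> float:
    """Estimate object-to-camera distance from the signal round-trip time."""
    round_trip = receive_time_s - emit_time_s
    if round_trip < 0:
        raise ValueError("reception cannot precede emission")
    # The signal travels to the object and back, so halve the path length.
    return C_LIGHT * round_trip / 2.0

# A 2 ns round trip corresponds to an object about 0.3 m away.
distance = shooting_distance_m(0.0, 2e-9)
```

The halving step is the key design point: the measured delay covers the path to the object and back, while the focusing logic needs the one-way distance.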
From the fringe state changes, the CPU calculates the depth information of the captured original image and performs image segmentation on the original image based on the depth information. Exemplarily, the CPU first separates the person from the background in the original image based on the depth information, and then segments the person into different regions based on the depth information; the segmentation precision may be finer than 1 mm.
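The person/background split and the finer sub-millimetre segmentation described above can both be viewed as grouping pixels whose depths fall into the same depth bin. A minimal pure-Python sketch of this idea, with illustrative bin size and data (not values from the application):

```python
def segment_by_depth(depth_map, bin_size):
    """Assign each pixel a region label: pixels whose depths fall in the
    same bin of width bin_size belong to the same region."""
    return [[int(d // bin_size) for d in row] for row in depth_map]

# Depths in millimetres; with a 1 mm bin, the subject (around 400 mm) and
# the background (around 1500 mm) receive clearly different labels.
depth_mm = [
    [400.2, 400.7, 1500.1],
    [401.3, 400.9, 1499.8],
]
labels = segment_by_depth(depth_mm, bin_size=1.0)
```

A smaller `bin_size` gives finer regions, matching the claim that segmentation precision can be finer than 1 mm when the depth measurement supports it.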
After the image segmentation is completed, the CPU reads the surface shape information of the screen glass cover stored in the electrically erasable programmable read-only memory (EEPROM) of the front camera, and performs deconvolution on each segmented region based on the surface shape information, thereby removing the influence of the curvature of the screen glass cover on the image in every region. After the deconvolution is completed, the CPU performs image synthesis to obtain a complete corrected picture and outputs it.
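The correction step above treats the captured image as the true scene convolved with a response determined by the cover-glass surface, so removing that response is a deconvolution. The 1-D sketch below illustrates the principle with a hypothetical two-tap causal kernel; the actual kernel in the application would be derived from the measured surface shape information.

```python
def convolve_causal(signal, kernel):
    """Blur a 1-D signal with a causal kernel (zero padding at the start)."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for i, k in enumerate(kernel):
            if n - i >= 0:
                acc += k * signal[n - i]
        out.append(acc)
    return out

def deconvolve_causal(observed, kernel):
    """Invert convolve_causal by forward substitution (requires kernel[0] != 0)."""
    restored = []
    for n, y in enumerate(observed):
        acc = y
        for i in range(1, len(kernel)):
            if n - i >= 0:
                acc -= kernel[i] * restored[n - i]
        restored.append(acc / kernel[0])
    return restored

blurred = convolve_causal([1.0, 2.0, 3.0], [0.5, 0.5])
# blurred == [0.5, 1.5, 2.5]; deconvolving recovers the original signal.
restored = deconvolve_causal(blurred, [0.5, 0.5])
```

Running the deconvolution per segmented region, as the text describes, lets each region use a kernel matched to its own depth and cover-glass curvature.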
The image capturing method provided by the embodiments of the present application is further described below.
FIG. 2 shows a flowchart of an image capturing method provided by an exemplary embodiment of the present application. The method may be applied to a terminal, where the terminal includes a lens module disposed under a screen glass cover. The method includes the following steps.
Step 201: Obtain surface shape information of the screen glass cover.
In a possible implementation, the terminal obtains pre-stored surface shape information of the screen glass cover in front of the lens module, or obtains the surface shape information of the screen glass cover through real-time detection.
Exemplarily, when the lens module in the terminal is in a shooting state, the terminal performs real-time detection to obtain the surface shape information of the screen glass cover. The lens module being in a shooting state means that the lens module is framing a scene and has not yet completed the shot. Exemplarily, the user enters the camera function page and taps the camera-switching icon to switch the terminal to the front camera; the terminal then frames the scene with the front camera and obtains the current surface shape information of the screen glass cover.
Exemplarily, the terminal reads the pre-stored surface shape information of the screen glass cover from a memory (such as an EEPROM). The surface shape information in the memory may be pre-stored before the terminal is assembled, or may be stored after the screen glass cover is detected once the terminal has been assembled.
In the embodiments of the present application, the lens module is disposed under the screen glass cover; it receives light from the area in front of the screen glass cover through the cover and optically images the scene in that area. That is, the lens module in the embodiments of the present application is not a lens module of the following form: one designed with a pop-up or similar mechanism that can receive light without passing through the screen glass cover.
Exemplarily, when the screen glass cover does not cover both of the two opposite faces of the terminal but covers only the front face of the terminal, the lens module can be understood as a lens module for front-facing photography.
The surface shape information of the screen glass cover is information indicating the surface profile of the screen glass cover. In the embodiments of the present application, it may also be understood as curvature information of the screen glass cover, information on the influence of the screen glass cover on imaging, and the like.
Step 202: Perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover.
In a possible implementation, the terminal captures the original image using the lens module, invokes the obtained surface shape information of the screen glass cover, and performs image correction on the original image.
Optionally, if the terminal obtains the surface shape information of the screen glass cover through real-time detection, the terminal stores it in the memory corresponding to the lens module, such as an EEPROM, after obtaining it, and reads the surface shape information of the screen glass cover from that memory after capturing the original image.
Exemplarily, with reference to FIG. 3, the lens module includes an image sensor and a front camera. The image sensor feeds the received detection reflection signal back to the CPU; the CPU calculates the surface shape information of the screen glass cover based on the detection reflection signal fed back by the image sensor, and transfers the surface shape information to the front camera EEPROM for storage.
Optionally, after obtaining the surface shape information of the screen glass cover through one detection, the terminal may use that surface shape information to correct original images captured over a period of time; that is, the surface shape information of the screen glass cover is updated at a fixed frequency. Exemplarily, after obtaining the surface shape information of the screen glass cover once, the terminal stores it and uses it to correct captured original images over the following month.
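The fixed-frequency update policy above amounts to caching the measured surface shape information with an expiry time. A sketch under illustrative assumptions (the class name, the injected measurement callback, and the interval are examples, not details from the application):

```python
class SurfaceInfoCache:
    """Cache cover-glass surface shape info, re-measuring only after a
    fixed interval has elapsed."""

    def __init__(self, measure_fn, max_age_s):
        self._measure = measure_fn      # callback that runs a fresh detection
        self._max_age = max_age_s       # refresh interval in seconds
        self._value = None
        self._measured_at = None

    def get(self, now_s):
        stale = (self._measured_at is None
                 or now_s - self._measured_at > self._max_age)
        if stale:
            self._value = self._measure()
            self._measured_at = now_s
        return self._value

calls = []
cache = SurfaceInfoCache(lambda: calls.append(1) or "surface-map", max_age_s=10.0)
cache.get(0.0)    # first call triggers a measurement
cache.get(5.0)    # within the interval: served from the cache
cache.get(20.0)   # interval elapsed: measured again
```

Passing the current time in explicitly keeps the policy testable; a real implementation might use a monotonic clock instead.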
Optionally, the surface shape information of the screen glass cover is automatically updated each time the terminal captures an image. Exemplarily, when capturing an image using the lens module, the terminal obtains the surface shape information of the screen glass cover during framing, captures the image after framing is completed to obtain the original image, and corrects the original image using the surface shape information obtained this time. Exemplarily, if a protective film is applied to the screen glass cover before the shot, changing the optical structure (material/thickness) of the cover, the terminal detects the curvature of the filmed screen glass cover to obtain the latest surface shape information and applies it to image correction in the actual shot.
Step 203: Output the corrected image obtained after the original image is corrected.
In a possible implementation, the terminal corrects the original image into a corrected image and outputs the corrected image.
In summary, in the method provided by this embodiment, image correction is performed on the original image captured by the lens module under the screen glass cover based on the obtained surface shape information of the screen glass cover, avoiding the influence of the curved screen on image capture by the lens module. On the one hand, this improves the imaging quality of image capture; on the other hand, the hardware design of the terminal no longer needs to account for the influence of the curvature of the screen glass cover on the placement of the camera assembly, which would otherwise disturb the arrangement of existing components in the terminal, thereby reducing the implementation complexity of the terminal.
In an exemplary embodiment, the terminal performs image correction on the original image based on the depth information of the original image to achieve a better correction effect.
FIG. 4 shows a flowchart of an image capturing method provided by an exemplary embodiment of the present application. The method may be applied to a terminal, where the terminal includes a lens module disposed under a screen glass cover. The method includes the following steps.
Step 401: Obtain surface shape information of the screen glass cover.
For the implementation of this step, refer to step 201 above; details are not repeated here.
Step 402: Obtain depth information of the original image.
The depth information of the original image refers to the three-dimensional information, in the three-dimensional world, of the photographed object in the original image.
Optionally, the lens module in the terminal obtains the depth information of the original image using machine vision technology. The machine vision technology may include structured-light detection and the time-of-flight (ToF) method; this is not limited in the embodiments of the present application.
The time-of-flight method is a detection approach that obtains the depth information of the photographed object by measuring the time it takes for light to reach the object and return.
Structured-light detection is a detection approach that projects a specific coded pattern onto the photographed object, converts depth information into deformation of the coded pattern, and obtains the depth information by detecting that deformation. Optionally, structured-light detection includes fringe light detection and speckle pattern detection: the coded pattern projected in fringe light detection is a fringe pattern, and the coded pattern projected in speckle pattern detection is a speckle pattern. In a possible implementation, the terminal obtains the depth information of the original image through fringe light detection as follows.
S11: The lens module sends a first detection fringe emission signal.
Optionally, the lens module includes a fringe emitter, and the terminal invokes the fringe emitter to emit the first detection fringe emission signal.
S12: The lens module receives a first detection fringe reflection signal, where the first detection fringe reflection signal is a signal formed by the first detection fringe emission signal being reflected by the photographed object.
Optionally, the lens module includes an image sensor. The first detection fringe emission signal passes through the screen glass cover and is projected onto the surface of the photographed object, where it is reflected to form the first detection fringe reflection signal, which is received by the image sensor.
S13: Obtain the depth information of the original image based on the change between the first detection fringe reflection signal and the first detection fringe emission signal.
In a possible implementation, the spacing, width, and other properties of the fringes change between the first detection fringe emission signal and the first detection fringe reflection signal, and the terminal calculates the depth information of the original image based on these changes.
Exemplarily, with reference to FIG. 5, when the fringe pattern 501 (that is, the first detection fringe emission signal) is projected onto an object 502, the originally vertical fringes in the fringe pattern 501 become distorted, and the spacing and width of the fringes change. The fringe pattern 501 is modulated by the height of the object 502, and the distorted fringe shape of the fringe pattern 503 (that is, the first detection fringe reflection signal) encodes the depth information of the object 502.
Optionally, the terminal obtains the depth information of the original image by performing a fitting calculation on the change between the first detection fringe reflection signal and the first detection fringe emission signal. Optionally, the fitting method may be Gaussian polynomial fitting or another fitting scheme; this is not limited in the embodiments of the present application.
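Under a simplified triangulation model of fringe projection (an assumption of this sketch, not a formula stated in the application), the fringe deformation described above maps to surface height: a local phase shift of the fringes gives a lateral displacement, which, divided by the tangent of the projection angle, gives the height.

```python
import math

def height_from_phase_shift(delta_phase_rad, fringe_period_mm,
                            projection_angle_rad):
    """Convert a local fringe phase shift into a surface height (mm),
    assuming simple triangulation geometry."""
    # Phase shift -> lateral displacement of the fringe on the surface.
    lateral_shift_mm = (delta_phase_rad / (2 * math.pi)) * fringe_period_mm
    # Triangulation: displacement / tan(angle between projector and camera).
    return lateral_shift_mm / math.tan(projection_angle_rad)

# A half-period shift (pi) of a 2 mm fringe at a 45-degree projection
# angle corresponds to about 1 mm of height.
h = height_from_phase_shift(math.pi, 2.0, math.pi / 4)
```

Real fringe profilometry additionally unwraps phase and calibrates the geometry; the fitting schemes mentioned in the text (e.g., Gaussian polynomial fitting) serve to extract the phase shift robustly from noisy fringe images.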
Step 403: Segment the original image into at least one image region based on the depth information.
Optionally, based on the depth information of the image, the terminal groups pixels with similar depths into one image region.
In a possible implementation, the terminal segments a part of the original image into at least one image region based on the depth information. In another possible implementation, the terminal segments the entire original image into at least one image region based on the depth information.
Exemplarily, the original image includes a person and a background. The terminal first separates the person from the background based on the depth information, and then further segments the person and the background into smaller image regions based on the depth information.
Exemplarily, the original image includes a person and a background. The terminal first separates the person from the background based on the depth information, and then segments the person portion of the original image into at least one image region based on the depth information.
Step 404: Perform image correction on the at least one image region based on the surface shape information of the screen glass cover.
In a possible implementation, the terminal performs deconvolution on the at least one image region using the surface shape information of the screen glass cover to achieve image correction.
The original image output by the lens module can be regarded as the result of convolving the real image with the surface profile of the screen glass cover. Therefore, by deconvolving the original image corresponding to at least one image region using the surface shape information of the screen glass cover, the real image corresponding to that image region can be obtained.
Optionally, the deconvolution is based on a cross-channel prior. A cross-channel prior shares information across channels during deconvolution, so that frequency information retained in one channel can help reconstruct the other channels and eliminate chromatic aberration. Adding chromatic aberration correction through the cross-channel prior reduces the blurring and color fringing caused by chromatic aberration during image correction, thereby achieving high-quality imaging.
Step 405: Synthesize the original image with the at least one image region after image correction to obtain a corrected image.
In a possible implementation, after separately correcting the at least one image region, the terminal synthesizes the uncorrected portion of the original image with the at least one corrected image region, thereby obtaining the corrected image.
Exemplarily, the terminal segments the person portion of the original image into an image region, performs image correction on that region, and then synthesizes the corrected person portion with the remaining portions of the original image to obtain the corrected image.
It can be understood that, if the entire original image is segmented into at least one image region, the terminal synthesizes the at least one corrected image region to obtain the corrected image.
Step 406: Output the corrected image.
In summary, in the method provided by this embodiment, the terminal obtains the depth information of the original image, divides the original image into multiple image regions based on the depth information, and corrects each region separately. Compared with correcting the original image uniformly, the finer computation granularity improves the accuracy of image correction.
在示意性实施例中,屏幕玻璃盖板的面型信息是终端基于机器视觉技术进行探测得到的。机器视觉技术可以包括:结构光探测技术和光飞行时间法,本申请实施例对此不进行限制。可选的,结构光探测技术包括:条纹光探测和散斑图探测。下面,以终端通过条纹光探测方式,获取屏幕玻璃盖板的面型信息进行示例性的说明。In an exemplary embodiment, the surface shape information of the glass cover plate of the screen is acquired by the terminal based on the detection of the machine vision technology. The machine vision technology may include: structured light detection technology and light time-of-flight method, which are not limited in this embodiment of the present application. Optionally, the structured light detection technology includes: fringe light detection and speckle pattern detection. In the following, an exemplary description will be given of obtaining the surface shape information of the screen glass cover plate by means of stripe light detection by the terminal.
图6示出了本申请一个示例性实施例提供的图像拍摄方法的流程图,该方法可以应用于终端中,终端包括设置于屏幕玻璃盖板下的镜头模组,该方法包括:FIG. 6 shows a flowchart of an image capturing method provided by an exemplary embodiment of the present application. The method can be applied to a terminal. The terminal includes a lens module disposed under a screen glass cover, and the method includes:
步骤601,调用镜头模组进行条纹光探测,获取屏幕玻璃盖板的面型信息。In step 601, the lens module is called to perform stripe light detection, and the surface shape information of the screen glass cover is obtained.
条纹光探测指的是通过带有确定条纹信息的探测条纹信号进行反射测试的探测方式。确定条纹信息指的是探测条纹信号所对应的条纹图案中,条纹的间距、宽度等是固定的。The fringe light detection refers to the detection method in which the reflection test is carried out through the detection fringe signal with the determined fringe information. Determining the fringe information means that in the fringe pattern corresponding to the detection fringe signal, the spacing and width of the fringes are fixed.
In a possible implementation, the terminal performs fringe light detection as follows:
S21: the lens module sends a second detection fringe emission signal.
Optionally, the lens module includes a fringe emitting end, and the terminal invokes the fringe emitting end to emit the second detection fringe emission signal.
In this embodiment of the present application, the second detection fringe emission signal and the first detection fringe emission signal described in the above embodiments may be two different parts of a detection fringe emission signal emitted by the fringe emitting end at the same time point: the first part is the first detection fringe emission signal, which passes through the screen glass cover and is projected onto the surface of the object being photographed; the second part is the second detection fringe emission signal, which does not pass through the screen glass cover.
S22: the lens module receives a second detection fringe reflection signal, which is a signal formed by the second detection fringe emission signal being reflected by the screen glass cover.
Optionally, the lens module includes an image sensor; the second detection fringe emission signal is reflected by the screen glass cover to form the second detection fringe reflection signal, and the second detection fringe reflection signal is received by the image sensor.
S23: obtain the surface shape information of the screen glass cover based on the change between the second detection fringe reflection signal and the second detection fringe emission signal.
In a possible implementation, the spacing, width, and other properties of the fringes change between the second detection fringe reflection signal and the second detection fringe emission signal, and the terminal calculates the surface shape information of the screen glass cover based on these changes.
Optionally, the terminal obtains the surface shape information of the screen glass cover as follows: the terminal performs a fitting calculation on the change between the second detection fringe reflection signal and the second detection fringe emission signal, thereby obtaining the surface shape information of the screen glass cover. Optionally, the fitting method used in the fitting calculation may be Gaussian polynomial fitting or another fitting scheme, which is not limited in this embodiment of the present application.
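As one concrete form such a fitting calculation could take (the application names Gaussian polynomial fitting only as an option; this sketch uses an ordinary 2-D polynomial least-squares fit), height samples inferred from the fringe changes can be fitted to a low-order surface whose coefficients then describe the cover-glass surface shape:

```python
import numpy as np

def fit_surface(x, y, z, order=2):
    """Least-squares fit of z = f(x, y) with a 2-D polynomial of given order.

    x, y: sample coordinates on the cover glass; z: height offsets inferred
    from the fringe spacing/width changes (how z is measured is outside this
    sketch). Returns the monomial exponents and fitted coefficients.
    """
    # Design matrix with all monomials x^i * y^j, i + j <= order.
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

def eval_surface(terms, coeffs, x, y):
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coeffs))

# Synthetic example: a gently curved cover glass z = 0.01*x^2 + 0.02*y^2,
# whose second-order coefficients correspond to the X and Y curvatures.
xs, ys = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
x, y = xs.ravel(), ys.ravel()
z = 0.01 * x**2 + 0.02 * y**2
terms, coeffs = fit_surface(x, y, z)
z_hat = eval_surface(terms, coeffs, x, y)
```

The fitted coefficients play the role of the surface shape information: evaluating the polynomial reproduces the cover-glass height at any point.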
Step 602: perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover.
For the implementation of this step, refer to step 202 above; details are not repeated here.
Step 603: output the corrected image obtained after the original image is corrected.
For the implementation of this step, refer to step 203 above; details are not repeated here.
To sum up, in the method provided by this embodiment, the terminal performs fringe light detection and obtains accurate surface shape information of the screen glass cover from the change between the detection fringe reflection signal and the detection fringe emission signal. When this surface shape information is used for image correction, the effect of the image correction is improved.
In an exemplary embodiment, the terminal adjusts the focus of the lens module before capturing an image, thereby improving the imaging quality of the image.
FIG. 7 shows a flowchart of an image capturing method provided by an exemplary embodiment of the present application. The method may be applied to a terminal that includes a lens module disposed under a screen glass cover, and the method includes:
Step 701: obtain the surface shape information of the screen glass cover.
For the implementation of this step, refer to step 201 above; details are not repeated here.
Step 702: obtain a reference distance, which is the distance from the object being photographed to the lens module.
While the lens module is framing, the terminal obtains the distance from the object being photographed to the lens module and focuses based on that distance.
Optionally, the reference distance is obtained by the terminal through detection based on machine vision technology. The machine vision technology may include structured light detection and the optical time-of-flight method, which is not limited in this embodiment of the present application. Optionally, structured light detection includes fringe light detection and speckle pattern detection.
In a possible implementation, the terminal obtains the reference distance by means of fringe light detection:
S31: invoke the lens module to send a third detection fringe emission signal.
Optionally, the lens module includes a fringe emitting end, and the terminal invokes the fringe emitting end to emit the third detection fringe emission signal.
In this embodiment of the present application, the third detection fringe emission signal and the first detection fringe emission signal described in the above embodiments may be the same signal.
S32: invoke the lens module to receive a third detection fringe reflection signal, which is a signal formed by the third detection fringe emission signal being reflected by the object being photographed.
Optionally, the lens module includes an image sensor; the third detection fringe emission signal passes through the screen glass cover, is projected onto the surface of the object being photographed, and is reflected by the object to form the third detection fringe reflection signal, which is received by the image sensor.
In this embodiment of the present application, the third detection fringe reflection signal and the first detection fringe reflection signal described in the above embodiments may be the same signal.
S33: determine the reference distance based on a first round-trip time.
The first round-trip time is the difference between the time point at which the lens module emits the third detection fringe emission signal and the time point at which it receives the third detection fringe reflection signal.
Since the third detection fringe emission signal is emitted by the lens module, the third detection fringe reflection signal is formed by that emission signal being reflected by the object being photographed, and the reflection signal is likewise received by the lens module, the distance from the object being photographed to the lens module can be calculated from the time difference between emission and reception (that is, the first round-trip time) and the propagation speed of the signal.
It can be understood that, if the third detection fringe emission signal and the second detection fringe emission signal are different parts of one signal emitted by the terminal at the same time point, the first round-trip time can be measured as the difference between the time point at which the lens module receives the third detection fringe reflection signal and the time point at which it receives the second detection fringe reflection signal, where the second detection fringe reflection signal is a signal formed by the second detection fringe emission signal being reflected by the screen glass cover. Because the distance between the screen glass cover and the lens module is very small, it has little effect on the value of the reference distance; therefore, when the terminal emits the third detection fringe emission signal and the second detection fringe emission signal at the same time, the terminal may treat the difference between the two reception time points as equivalent to the first round-trip time.
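The distance computation in S33 is the standard time-of-flight relation: the signal traverses the object-to-module distance twice, so, assuming the detection signal propagates at the speed of light, the reference distance is d = c·Δt/2. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def reference_distance(t_emit, t_receive):
    """Distance from object to lens module from one round trip.

    t_emit / t_receive are the emission and reception time points in
    seconds; their difference is the first round-trip time.
    """
    round_trip = t_receive - t_emit
    # The signal covers the distance twice (out and back), hence the /2.
    return C * round_trip / 2.0

# A round trip lasting 1/c seconds corresponds to a 0.5 m object distance.
d = reference_distance(0.0, 1.0 / C)
```

With the equivalence described above, `t_emit` may be replaced by the time point at which the second detection fringe reflection signal is received.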
Step 703: adjust the focus of the lens module based on the reference distance.
In a possible implementation, the lens module uses a zoom lens. After obtaining the reference distance, the terminal uses a focusing motor to drive the lens in the lens module to an ideal position to complete focusing, where the ideal position is the position of the lens at which the lens can achieve an ideal focusing effect on the object at the current reference distance.
It can be understood that, if the lens module uses a fixed-focus lens, the terminal compares the reference distance with an ideal reference distance range after obtaining it. If the reference distance is not within the ideal reference distance range, prompt information is displayed on the terminal to prompt the user to adjust the distance between the object being photographed and the terminal, where the ideal reference distance range is the range of distances from the object being photographed to the lens module at which the lens module achieves an ideal focusing effect. Exemplarily, if the reference distance is greater than the ideal reference distance range, the terminal prompts the user to shorten the distance between the object being photographed and the terminal; if the reference distance is less than the ideal reference distance range, the terminal prompts the user to increase that distance.
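The fixed-focus comparison described above reduces to a range check. The range bounds in this sketch are illustrative placeholders, not values from the application:

```python
def focus_prompt(reference_distance, ideal_range=(0.3, 1.2)):
    """Return a user prompt when the object lies outside the ideal range.

    ideal_range: (min, max) distance in meters at which the fixed-focus
    lens achieves an ideal focusing effect (illustrative values only).
    """
    lo, hi = ideal_range
    if reference_distance > hi:
        return "Move closer to the camera"
    if reference_distance < lo:
        return "Move farther from the camera"
    return None  # within range, no prompt needed

msg = focus_prompt(2.0)
```

The two prompt strings correspond to the two exemplary cases in the text: a too-large distance asks the user to shorten it, a too-small distance asks the user to increase it.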
Optionally, the reference distance may be a single value or multiple values. When the reference distance is a single value, it is the distance from one point on the object being photographed to the lens module; when the reference distance comprises multiple values, they are the distances from multiple points on the object being photographed to the lens module.
Optionally, in response to obtaining at least two reference distances, the terminal processes the at least two reference distances based on an attention mechanism to obtain a target reference distance, and adjusts the focus of the lens module based on the target reference distance.
Processing at least two reference distances based on an attention mechanism refers to assigning different weights to the at least two reference distances and then computing a weighted result.
Optionally, the terminal assigns weights to different reference distances based on the positions, in the image, of the points on the object being photographed that correspond to those reference distances. For example, if the point corresponding to a first reference distance lies in the middle of the image and the point corresponding to a second reference distance lies at the edge of the image, the weight of the first reference distance is higher than that of the second reference distance.
Optionally, the terminal assigns weights to different reference distances based on the nature of the corresponding points on the object being photographed. For example, if the point corresponding to the first reference distance belongs to a person and the point corresponding to the second reference distance belongs to the background, the weight of the first reference distance is higher than that of the second reference distance.
Optionally, the terminal assigns weights to different reference distances based on whether the object corresponding to each reference distance has been selected by the user. For example, if the object corresponding to the first reference distance is selected by the user and the object corresponding to the second reference distance is not, the weight of the first reference distance is higher than that of the second reference distance.
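The attention-based weighting described above amounts to a normalized weighted average of the reference distances. In this sketch the weighting rules (a boost for centered points and for points on the subject) are hypothetical stand-ins for the position-, nature-, and selection-based rules listed above:

```python
def target_reference_distance(points):
    """Combine per-point reference distances into one target distance.

    `points` is a list of dicts with keys:
      distance   - reference distance for this point (m)
      centered   - True if the point lies near the image center
      is_subject - True if the point belongs to the subject (vs background)
    The weighting rules are illustrative stand-ins for the attention
    mechanism described in the text.
    """
    weights, distances = [], []
    for p in points:
        w = 1.0
        if p.get("centered"):
            w *= 2.0  # center points count more than edge points
        if p.get("is_subject"):
            w *= 3.0  # subject points count more than background points
        weights.append(w)
        distances.append(p["distance"])
    # Normalized weighted average of the candidate distances.
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, distances)) / total

target = target_reference_distance([
    {"distance": 0.5, "centered": True, "is_subject": True},    # weight 6
    {"distance": 2.0, "centered": False, "is_subject": False},  # weight 1
])
```

Here the centered subject point dominates, so the target distance lands close to 0.5 m rather than the 2.0 m background reading.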
Step 704: capture an image using the focused lens module to obtain the original image.
In a possible implementation, after adjusting the focus based on the reference distance, the terminal captures an image using the lens module; the original image captured in this way has a good focusing effect.
Step 705: perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover.
For the implementation of this step, refer to step 202 above; details are not repeated here.
Step 706: output the corrected image obtained after the original image is corrected.
For the implementation of this step, refer to step 203 above; details are not repeated here.
To sum up, in the method provided by this embodiment, the terminal obtains the reference distance between the object being photographed and the lens module before capturing an image, and uses the reference distance to adjust the focus of the lens module. This provides an auxiliary focusing implementation and thereby improves the imaging quality of the image.
In the following, the lens module of the above embodiments is described by way of example. FIG. 8 shows a schematic diagram of a lens module provided by an exemplary embodiment of the present application. The lens module includes a camera 801, a detection signal emitting end 802, and an image sensor 803.
As shown in FIG. 8, the detection signal emitting end 802 and the image sensor 803 are arranged symmetrically about the optical axis of the camera 801. Optionally, the optical axis of the detection signal emitting end 802 and the central axis of the photosensitive surface of the image sensor 803 are arranged symmetrically about the optical axis of the camera 801.
Optionally, the detection signal emitting end 802 and the image sensor 803 may be arranged independently, with no fixed connection between the two devices; alternatively, they may be arranged as a unit with a fixed connection between them, for example, the two devices forming a U-shaped structure that surrounds the two sides of the camera 801.
The camera 801 is used for image capture. Optionally, the camera 801 is a front camera.
The detection signal emitting end 802 is used to emit a detection emission signal. Optionally, the detection signal emitting end 802 emits a detection fringe emission signal, in which case the detection signal emitting end 802 is a fringe emitting end. Optionally, the detection signal emitting end 802 emits a detection speckle emission signal, in which case the detection signal emitting end 802 is a speckle pattern emitting end.
Optionally, the detection signal emitting end 802 is a liquid crystal display (LCD) screen. Optionally, the wavelength of the detection signal emitted by the detection signal emitting end 802 lies outside the visible range, for example, in the infrared.
The image sensor 803 is used to receive a detection reflection signal, which is a signal formed by the reflection of the detection emission signal. Optionally, the detection reflection signal received by the image sensor 803 is a detection fringe reflection signal. Optionally, the detection reflection signal received by the image sensor 803 is a detection speckle reflection signal. Optionally, the detection target surface of the image sensor 803 is no less than 2000×2000 pixels, with a single pixel size of 2 μm to 4 μm. Optionally, the image sensor 803 is an area-array image sensor, which supports detecting the distances from multiple points on the object being photographed to the lens module, that is, it supports obtaining multiple reference distances.
With reference to FIG. 8, and taking the case where the detection signal emitting end 802 is a fringe emitting end as an example, the fringe light detection performed by the lens module is described below.
The detection signal emitting end 802 emits a detection fringe emission signal carrying determined fringe information. The detection fringe reflection signal corresponding to this emission signal is received by the image sensor 803.
One part of the detection fringe emission signal (namely the second detection fringe emission signal) is reflected by the screen glass cover 804 of the terminal to form the second detection fringe reflection signal, which is received by the image sensor 803 and sent to the CPU of the terminal. The CPU calculates the surface shape information of the screen glass cover 804 from the fringe changes, mainly including the curvatures in the X and Y directions, thereby constructing a three-dimensional model of the screen glass cover 804.
The other part of the detection fringe emission signal (namely the first detection fringe emission signal) passes through the screen glass cover 804, is projected onto the surface of the object 805 being photographed, and is reflected to form the first detection fringe reflection signal, which is received by the image sensor 803 and sent to the CPU of the terminal. Likewise based on the fringe changes, the CPU calculates the depth information of the object 805. The CPU may also calculate the reference distance from the object 805 to the lens module from the time difference between the image sensor 803 receiving the first detection fringe reflection signal and receiving the second detection fringe reflection signal.
In this embodiment of the present application, the screen glass cover 804 is a curved screen glass cover, under which the lens module described above is disposed. Correspondingly, the terminal may be a curved-screen terminal.
It can be understood that, if the screen glass cover 804 in the terminal is a rollable-screen glass cover or a foldable-screen glass cover, the image capturing method shown in this application is likewise applicable to terminals of those types.
To sum up, the lens module shown in this embodiment of the present application can perform fringe light detection by means of the symmetrically arranged fringe emitting end and image sensor. This fringe light detection scheme can be applied to the front of a mobile phone for two purposes: first, to detect and calculate the surface shape information of the screen glass cover above the lens module, and to compensate through an algorithm, correct aberrations, and correct the captured original image; second, to detect and calculate the distance and depth information of the object being photographed, achieving more accurate facial recognition and focusing.
It can be understood that the above method embodiments may be implemented individually or in combination, which is not limited in the embodiments of the present application.
The following are apparatus embodiments of the present application. For details not described in the apparatus embodiments, reference may be made to the corresponding descriptions in the above method embodiments; they are not repeated here.
FIG. 9 shows a schematic structural diagram of an image capturing apparatus provided by an exemplary embodiment of the present application. The apparatus may be implemented as all or part of a terminal through software, hardware, or a combination of the two, and includes a surface shape information obtaining module 901, an image correction module 902, and an image output module 903.
The surface shape information obtaining module 901 is configured to obtain the surface shape information of the screen glass cover.
The image correction module 902 is configured to perform image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover.
The image output module 903 is configured to output the corrected image obtained after the original image is corrected.
In an optional embodiment, the image correction module 902 is configured to obtain depth information of the original image; segment the original image into at least one image region based on the depth information; and perform image correction on the at least one image region based on the surface shape information of the screen glass cover. The image output module 903 is configured to synthesize the original image with the corrected at least one image region to obtain the corrected image, and to output the corrected image.
In an optional embodiment, the image correction module 902 is configured to cause the lens module to obtain the depth information of the original image based on structured light detection; or the image correction module 902 is configured to cause the lens module to obtain the depth information of the original image based on the optical time-of-flight method.
In an optional embodiment, the image correction module 902 is configured to cause the lens module to send a first detection fringe emission signal and to receive a first detection fringe reflection signal, the first detection fringe reflection signal being a signal formed by the first detection fringe emission signal being reflected by the object being photographed; and to obtain the depth information of the original image based on the change between the first detection fringe reflection signal and the first detection fringe emission signal.
In an optional embodiment, the image correction module 902 is configured to perform deconvolution processing on the at least one image region using the surface shape information of the screen glass cover, so as to perform image correction.
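One common way to realize such deconvolution (offered here as an illustration; the application does not fix the algorithm) is Wiener deconvolution in the frequency domain, where the point spread function would be derived from the surface shape information of the screen glass cover:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Wiener deconvolution of `blurred` with point spread function `psf`.

    In the apparatus described above, `psf` would be derived from the
    cover-glass surface shape information; here it is simply given.
    `noise_power` is the regularization constant K.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + K), applied per frequency.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(G * W))

# Blur a synthetic image with a 3x3 box PSF, then restore it.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
psf = np.ones((3, 3)) / 9.0
H = np.fft.fft2(psf, s=sharp.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))
restored = wiener_deconvolve(blurred, psf, noise_power=1e-9)
```

With negligible noise the tiny `noise_power` recovers the sharp image almost exactly; in practice K would be chosen from the sensor noise level.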
In an optional embodiment, the surface shape information obtaining module 901 is configured to cause the lens module to obtain the surface shape information of the screen glass cover based on structured light detection; or the surface shape information obtaining module 901 is configured to cause the lens module to obtain the surface shape information of the screen glass cover based on the optical time-of-flight method.
In an optional embodiment, the surface shape information obtaining module 901 is configured to cause the lens module to send a second detection fringe emission signal and to receive a second detection fringe reflection signal, the second detection fringe reflection signal being a signal formed by the second detection fringe emission signal being reflected by the screen glass cover; and to obtain the surface shape information of the screen glass cover based on the change between the second detection fringe reflection signal and the second detection fringe emission signal.
In an optional embodiment, the apparatus further includes a focusing module. The focusing module is configured to obtain a reference distance, which is the distance from the object being photographed to the lens module; adjust the focus of the lens module based on the reference distance; and capture an image using the focused lens module to obtain the original image.
In an optional embodiment, the focusing module is configured to cause the lens module to obtain the reference distance based on structured light detection; or the focusing module is configured to cause the lens module to obtain the reference distance based on the optical time-of-flight method.
In an optional embodiment, the focusing module is configured to cause the lens module to send a third detection fringe emission signal and to receive a third detection fringe reflection signal, the third detection fringe reflection signal being a signal formed by the third detection fringe emission signal being reflected by the object being photographed; and to determine the reference distance based on a first round-trip time, where the first round-trip time is the difference between the time point at which the lens module emits the third detection fringe emission signal and the time point at which it receives the third detection fringe reflection signal.
In an optional embodiment, the focusing module is configured to, in response to obtaining at least two reference distances, process the at least two reference distances based on an attention mechanism to obtain a target reference distance, and to adjust the focus of the lens module based on the target reference distance.
图10示出了本申请一个示例性实施例提供的终端1000的结构框图。该终端1000可以是便携式移动终端,比如:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。终端1000还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。FIG. 10 shows a structural block diagram of a terminal 1000 provided by an exemplary embodiment of the present application. The terminal 1000 may be a portable mobile terminal, such as: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, a moving picture expert compression standard audio layer 3), an MP4 (Moving Picture Experts Group Audio Layer IV, a dynamic image expert Video Expert Compresses Standard Audio Layer 4) Player, Laptop or Desktop. Terminal 1000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
In this embodiment of the present application, the terminal 1000 includes a processor 1001, a memory 1002, a peripheral device interface 1003, and a lens module 1006.
The processor 1001 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 1001 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor. The main processor, also called a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1001 may integrate a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1002 stores at least one instruction, which is executed by the processor 1001 to implement the image capturing method provided by the method embodiments of the present application.
The peripheral device interface 1003 may be used to connect at least one I/O (Input/Output) peripheral device to the processor 1001 and the memory 1002. In some embodiments, the processor 1001, the memory 1002, and the peripheral device interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral device interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The processor 1001, the memory 1002, and the peripheral device interface 1003 may be connected through a bus or a signal line. The lens module 1006 may be connected to the peripheral device interface 1003 through a bus, a signal line, or a circuit board.
The lens module 1006 is used to capture images or videos. Optionally, the lens module 1006 includes a camera, and a detection signal transmitter and an image sensor arranged symmetrically about the optical axis of the camera. The camera captures images; the detection signal transmitter emits a detection emission signal; and the image sensor receives a detection reflection signal, which is a signal formed by the reflection of the detection emission signal. The cameras include a front camera and a rear camera. Usually, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blur function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the lens module 1006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
In some embodiments, the terminal 1000 further includes peripheral devices other than the lens module 1006. Each peripheral device may be connected to the peripheral device interface 1003 through a bus, a signal line, or a circuit board. Specifically, the other peripheral devices include at least one of a radio frequency circuit 1004, a display screen 1005, an audio circuit 1007, a positioning component 1008, and a power supply 1009.
The radio frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1004 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 1004 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1004 can communicate with other terminals through at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may further include circuits related to NFC (Near Field Communication), which is not limited in the present application.
The display screen 1005 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, it can also acquire touch signals on or above its surface. Such a touch signal may be input to the processor 1001 as a control signal for processing. In this case, the display screen 1005 may also provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1005, arranged on the front panel of the terminal 1000; in other embodiments, there may be at least two display screens 1005, arranged on different surfaces of the terminal 1000 or in a folded design; in still other embodiments, the display screen 1005 may be a flexible display screen arranged on a curved or folding surface of the terminal 1000. The display screen 1005 may even be set as a non-rectangular irregular figure, that is, a specially shaped screen. The display screen 1005 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The audio circuit 1007 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment and converts them into electrical signals, which are input to the processor 1001 for processing, or input to the radio frequency circuit 1004 to implement voice communication. For stereo collection or noise reduction, there may be multiple microphones, arranged at different parts of the terminal 1000. The microphone may also be an array microphone or an omnidirectional microphone. The speaker converts electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1007 may also include a headphone jack.
The positioning component 1008 is used to locate the current geographic position of the terminal 1000 to implement navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1009 is used to power the components of the terminal 1000. The power supply 1009 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast charging technology.
In some embodiments, the terminal 1000 further includes one or more sensors 1010, including but not limited to an acceleration sensor 1011, a gyroscope sensor 1012, a pressure sensor 1013, a fingerprint sensor 1014, an optical sensor 1015, and a proximity sensor 1016.
The acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1000. For example, the acceleration sensor 1011 can detect the components of the gravitational acceleration on the three coordinate axes. The processor 1001 can control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 can also be used to collect motion data for games or for the user.
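The landscape-versus-portrait decision above can be reduced to comparing gravity components. The sketch below assumes a conventional axis layout (x across the short edge, y along the long edge of the terminal); the axis naming and the function are illustrative assumptions, not details from the application:

```python
def orientation_from_gravity(ax: float, ay: float) -> str:
    """Pick a UI orientation from the gravity components (m/s^2) on the
    terminal's x and y axes: whichever axis carries more of gravity is
    the "down" axis, so gravity mostly on x means the device is sideways."""
    return "landscape" if abs(ax) > abs(ay) else "portrait"
```

Real implementations typically add hysteresis so the UI does not flip when the two components are nearly equal.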
The gyroscope sensor 1012 can detect the body direction and rotation angle of the terminal 1000, and can cooperate with the acceleration sensor 1011 to collect the user's 3D actions on the terminal 1000. Based on the data collected by the gyroscope sensor 1012, the processor 1001 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1013 may be arranged on the side frame of the terminal 1000 and/or under the display screen 1005. When the pressure sensor 1013 is arranged on the side frame, it can detect the user's grip signal on the terminal 1000, and the processor 1001 performs left-right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is arranged under the display screen 1005, the processor 1001 controls the operable controls on the UI according to the user's pressure operation on the display screen 1005. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect the user's fingerprint. The processor 1001 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1014 may be arranged on the front, back, or side of the terminal 1000. When the terminal 1000 is provided with a physical button or a manufacturer logo, the fingerprint sensor 1014 may be integrated with the physical button or the manufacturer logo.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 can control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015: when the ambient light intensity is high, the display brightness of the display screen 1005 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1005 is turned down. In another embodiment, the processor 1001 can also dynamically adjust the shooting parameters of the lens module 1006 according to the ambient light intensity collected by the optical sensor 1015.
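A simple way to realize the brightness control described above is a clamped linear mapping from lux to a normalized brightness. The lux range, brightness floor, and linear shape below are illustrative assumptions; real devices tune these curves per panel:

```python
def display_brightness(ambient_lux: float) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [0.1, 1.0]: brighter surroundings -> brighter screen, clamped at
    both ends so extreme readings stay within the valid range."""
    MIN_LUX, MAX_LUX = 0.0, 1000.0   # assumed sensor range of interest
    MIN_B, MAX_B = 0.1, 1.0          # assumed brightness floor/ceiling
    clamped = max(MIN_LUX, min(ambient_lux, MAX_LUX))
    frac = (clamped - MIN_LUX) / (MAX_LUX - MIN_LUX)
    return MIN_B + frac * (MAX_B - MIN_B)
```

The floor keeps the screen readable in darkness, and the clamp keeps direct sunlight from requesting an out-of-range brightness.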
The proximity sensor 1016, also called a distance sensor, is usually arranged on the front panel of the terminal 1000. The proximity sensor 1016 is used to collect the distance between the user and the front of the terminal 1000. In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front of the terminal 1000 is gradually decreasing, the processor 1001 controls the display screen 1005 to switch from the screen-on state to the screen-off state; when the proximity sensor 1016 detects that the distance between the user and the front of the terminal 1000 is gradually increasing, the processor 1001 controls the display screen 1005 to switch from the screen-off state to the screen-on state.
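The screen-state rule above compares successive proximity readings. The sketch below assumes that interpretation; the state names and the two-sample comparison are illustrative assumptions, not details from the application:

```python
def next_screen_state(current: str, prev_distance: float, distance: float) -> str:
    """Return the next screen state ('on' or 'off') from two successive
    proximity readings, mirroring the behaviour described above."""
    if distance < prev_distance:    # user approaching the front panel
        return "off"
    if distance > prev_distance:    # user moving away
        return "on"
    return current                  # unchanged distance: keep current state
```

Keeping the current state on equal readings avoids flicker when the user holds still, e.g. during a call.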
Those skilled in the art can understand that the structure shown in FIG. 10 does not constitute a limitation on the terminal 1000, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
The present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the image capturing method provided by the above method embodiments.
The present application further provides a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image capturing method provided in the above optional implementations.
It should be understood that "a plurality of" mentioned herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that both A and B exist, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
Those of ordinary skill in the art can understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely optional embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (19)

  1. An image capturing method, applied to a terminal, the terminal comprising a lens module disposed under a screen glass cover, the method comprising:
    obtaining surface shape information of the screen glass cover;
    performing image correction on an original image captured by the lens module based on the surface shape information of the screen glass cover; and
    outputting a corrected image obtained after the original image is corrected.
  2. The method according to claim 1, wherein performing image correction on the original image captured by the lens module based on the surface shape information of the screen glass cover comprises:
    obtaining depth information of the original image;
    segmenting the original image into at least one image region based on the depth information; and
    performing image correction on the at least one image region based on the surface shape information of the screen glass cover;
    and wherein outputting the corrected image obtained after the original image is corrected comprises:
    synthesizing the original image with the corrected at least one image region to obtain the corrected image; and
    outputting the corrected image.
  3. The method according to claim 2, wherein obtaining the depth information of the original image comprises:
    obtaining, by the lens module, the depth information of the original image based on a structured light detection technology;
    or,
    obtaining, by the lens module, the depth information of the original image based on a time-of-flight method.
  4. The method according to claim 3, wherein obtaining, by the lens module, the depth information of the original image based on the structured light detection technology comprises:
    sending, by the lens module, a first detection fringe emission signal;
    receiving, by the lens module, a first detection fringe reflection signal, the first detection fringe reflection signal being a signal formed by the first detection fringe emission signal being reflected by a photographed object; and
    obtaining the depth information of the original image based on the change between the first detection fringe reflection signal and the first detection fringe emission signal.
  5. The method according to claim 2, wherein performing image correction on the at least one image region based on the surface shape information of the screen glass cover comprises:
    performing deconvolution processing on the at least one image region using the surface shape information of the screen glass cover, so as to perform image correction.
  6. The method according to any one of claims 1 to 5, wherein obtaining the surface shape information of the screen glass cover comprises:
    obtaining, by the lens module, the surface shape information of the screen glass cover based on a structured light detection technology;
    or,
    obtaining, by the lens module, the surface shape information of the screen glass cover based on a time-of-flight method.
  7. The method according to claim 6, wherein obtaining, by the lens module, the surface shape information of the screen glass cover based on the structured light detection technology comprises:
    sending, by the lens module, a second detection fringe emission signal;
    receiving, by the lens module, a second detection fringe reflection signal, the second detection fringe reflection signal being a signal formed by the second detection fringe emission signal being reflected by the screen glass cover; and
    obtaining the surface shape information of the screen glass cover based on the change between the second detection fringe reflection signal and the second detection fringe emission signal.
  8. The method according to any one of claims 1 to 7, further comprising:
    obtaining a reference distance, the reference distance being the distance from a photographed object to the lens module;
    focusing the lens module based on the reference distance; and
    capturing an image using the focused lens module to obtain the original image.
  9. The method according to claim 8, wherein obtaining the reference distance comprises:
    obtaining, by the lens module, the reference distance based on a structured light detection technology;
    or,
    obtaining, by the lens module, the reference distance based on a time-of-flight method.
  10. The method according to claim 9, wherein obtaining, by the lens module, the reference distance based on the structured light detection technology comprises:
    sending, by the lens module, a third detection fringe emission signal;
    receiving, by the lens module, a third detection fringe reflection signal, the third detection fringe reflection signal being a signal formed by the third detection fringe emission signal being reflected by the photographed object; and
    determining the reference distance based on a first round-trip time,
    wherein the first round-trip time is the difference between the time point at which the lens module emits the third detection fringe emission signal and the time point at which the lens module receives the third detection fringe reflection signal.
  11. The method according to claim 8, wherein focusing the lens module based on the reference distance comprises:
    in response to obtaining at least two reference distances, processing the at least two reference distances based on an attention mechanism to obtain a target reference distance; and
    focusing the lens module based on the target reference distance.
  12. A terminal, comprising: a processor, a memory, and a lens module disposed under a screen glass cover;
    wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the image capturing method according to any one of claims 1 to 11.
  13. The terminal according to claim 12, wherein
    the screen glass cover is a curved screen glass cover.
  14. The terminal according to claim 12 or 13, wherein the lens module comprises: a camera, and a detection signal transmitter and an image sensor arranged symmetrically about an optical axis of the camera;
    the camera is configured to capture images;
    the detection signal transmitter is configured to emit a detection emission signal; and
    the image sensor is configured to receive a detection reflection signal, the detection reflection signal being a signal formed by reflection of the detection emission signal.
  15. The terminal according to claim 14, wherein
    the detection emission signal comprises a detection fringe emission signal, and the detection reflection signal comprises a detection fringe reflection signal.
  16. An image capturing apparatus, comprising: a surface shape information obtaining module, an image correction module, and an image output module;
    wherein the surface shape information obtaining module is configured to obtain surface shape information of a screen glass cover;
    the image correction module is configured to perform image correction on an original image captured by a lens module based on the surface shape information of the screen glass cover; and
    the image output module is configured to output a corrected image obtained after the original image is corrected.
  17. A computer-readable storage medium, wherein the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the image capturing method according to any one of claims 1 to 11.
  18. A chip, comprising a programmable logic circuit and/or program instructions, wherein, when the chip runs, the chip is configured to implement the image capturing method according to any one of claims 1 to 11.
  19. A computer program product, comprising computer instructions stored in a computer-readable storage medium, wherein a processor reads and executes the computer instructions from the computer-readable storage medium to implement the image capturing method according to any one of claims 1 to 11.
PCT/CN2022/080664 2021-04-30 2022-03-14 Image photographing method and device, terminal and storage medium WO2022227893A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110478495.9 2021-04-30
CN202110478495.9A CN113191976B (en) 2021-04-30 2021-04-30 Image shooting method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
WO2022227893A1 true WO2022227893A1 (en) 2022-11-03

Family

ID=76982857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080664 WO2022227893A1 (en) 2021-04-30 2022-03-14 Image photographing method and device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN113191976B (en)
WO (1) WO2022227893A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191976B (en) * 2021-04-30 2024-03-22 Oppo广东移动通信有限公司 Image shooting method, device, terminal and storage medium
CN114666509A (en) * 2022-04-08 2022-06-24 Oppo广东移动通信有限公司 Image acquisition method and device, detection module, terminal and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190371219A1 (en) * 2018-06-04 2019-12-05 Acer Incorporated Demura system for non-planar screen
CN111274849A (en) * 2018-12-04 2020-06-12 上海耕岩智能科技有限公司 Method for determining imaging proportion of curved screen, storage medium and electronic equipment
CN111722816A (en) * 2019-03-19 2020-09-29 上海耕岩智能科技有限公司 Method for determining imaging ratio of bendable screen, electronic device and storage medium
CN112130800A (en) * 2020-09-29 2020-12-25 Oppo广东移动通信有限公司 Image processing method, electronic device, apparatus, and storage medium
CN112232155A (en) * 2020-09-30 2021-01-15 墨奇科技(北京)有限公司 Non-contact fingerprint identification method and device, terminal and storage medium
CN112651286A (en) * 2019-10-11 2021-04-13 西安交通大学 Three-dimensional depth sensing device and method based on transparent screen
CN113191976A (en) * 2021-04-30 2021-07-30 Oppo广东移动通信有限公司 Image shooting method, device, terminal and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8306348B2 (en) * 2007-04-24 2012-11-06 DigitalOptics Corporation Europe Limited Techniques for adjusting the effect of applying kernels to signals to achieve desired effect on signal
CN112004054A (en) * 2020-07-29 2020-11-27 深圳宏芯宇电子股份有限公司 Multi-azimuth monitoring method, equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN113191976B (en) 2024-03-22
CN113191976A (en) 2021-07-30


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 22794371; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase. Ref country code: DE
122 EP: PCT application non-entry in European phase. Ref document number: 22794371; Country of ref document: EP; Kind code of ref document: A1