WO2022156683A1 - Image processing method and apparatus, shooting stand, electronic device, and readable storage medium - Google Patents
Image processing method and apparatus, shooting stand, electronic device, and readable storage medium (图像处理方法、装置、拍摄支架、电子设备及可读存储介质)
- Publication number
- WO2022156683A1 (PCT/CN2022/072577)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- camera
- images
- target sample
- target
- Prior art date
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
        - H04N23/60—Control of cameras or camera modules
          - H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
        - H04N23/95—Computational photography systems, e.g. light-field imaging systems
          - H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Definitions
- the embodiments of the present application relate to the field of communication technologies, and in particular, to an image processing method, an apparatus, a photographing stand, an electronic device, and a readable storage medium.
- design schemes such as the "hole-punch screen" and the "waterdrop notch screen" are usually adopted to reduce the influence of the front camera on the screen-to-body ratio. Going further, the under-screen camera design has greatly improved the screen-to-body ratio of electronic devices.
- however, because the camera is located below the screen and is partially occluded by it, the image quality of images captured by the under-screen camera is poor.
- the purpose of the embodiments of the present application is to provide an image processing method, an apparatus, a shooting stand, an electronic device and a readable storage medium, which can solve the problem of poor image quality of an image captured by an under-screen camera.
- according to a first aspect, an embodiment of the present application provides an image processing method. The method includes: capturing a first image through a first camera, where the first camera is an under-screen camera; and processing the first image with an image processing model to obtain a second image whose image quality is higher than that of the first image. The image processing model is obtained by training a preset model with a target sample set. Each target sample in the target sample set includes two images: the images captured by a second camera and a third camera at the same position, with the same shooting angle, the same shooting environment, the same shooting parameters, and of the same subject.
- the image quality of the image captured by the second camera is lower than that of the image captured by the third camera; the second camera is an under-screen camera.
- according to a second aspect, the embodiments of the present application further provide an image processing apparatus. The apparatus includes an acquisition module and a processing module. The acquisition module is used to capture a first image through a first camera, where the first camera is an under-screen camera. The processing module is used to process the first image captured by the acquisition module with an image processing model to obtain a second image whose image quality is higher than that of the first image. The image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images captured by a second camera and a third camera at the same position, with the same shooting angle, the same shooting environment, the same shooting parameters, and of the same subject; the image quality of the image captured by the second camera is lower than that of the image captured by the third camera; and the second camera is an under-screen camera.
- according to a third aspect, an embodiment of the present application provides a shooting stand, including: a stand, a slide rail connected to the stand, and a first pitch stage and a second pitch stage arranged on the slide rail, where the first pitch stage is used to support a first camera and the second pitch stage is used to support a second camera;
- the first camera is used to capture a first target image;
- the second camera is used to capture a second target image;
- the first target image and the second target image are images captured by the first camera and the second camera at the same position, with the same shooting angle, the same shooting environment, the same shooting parameters, and of the same subject;
- the first target image and the second target image form one sample in a target sample set, and the target sample set is used to train a preset model.
- according to a fourth aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the image processing method according to the first aspect.
- according to a fifth aspect, an embodiment of the present application provides a readable storage medium, where a program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method according to the first aspect are implemented.
- according to a sixth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method according to the first aspect.
- in the embodiments of the present application, the preset model is trained by using, as samples in the target sample set, the images captured at the same position and with the same shooting angle by the under-screen camera (i.e., the second camera) and a normal camera (i.e., the third camera). The trained image processing model is then used to process the first image captured by the first camera to obtain a second image with higher image quality, which improves the quality of photos taken by the under-screen camera.
- FIG. 1 is a schematic diagram of an electronic device adopting an under-screen camera solution according to an embodiment of the present application;
- FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of image segmentation applied by an image processing method provided in an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of a photographing support provided by an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
- FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- the terms "first", "second" and the like in the description and claims of the present application are used to distinguish similar objects and do not describe a specific order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein.
- objects distinguished by "first", "second", etc. are usually of one type, and the number of such objects is not limited.
- for example, the first object may be one or more than one.
- "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
- the image processing method provided by the embodiment of the present application can be applied to a scene in which an electronic device shoots with an under-screen camera.
- exemplarily, for the scenario in which an electronic device shoots through an under-screen camera, FIG. 1(A) shows a related-art design scheme in which the camera is located under the screen, i.e., the camera is arranged beneath the screen.
- when the under-screen camera captures an image, the light first passes through the screen. Because the screen blocks part of the light and diffraction occurs as the light passes through it, the image quality of the under-screen camera is poor (for example, the picture is dark, or there is a halo).
- in the embodiments of the present application, the images captured by the under-screen camera and a normal camera at the same position with the same shooting angle are taken as a sample in the sample set, and a sample set containing N samples is then obtained by changing the shooting conditions or replacing the photographed subject multiple times; this sample set is used to train the preset model.
- the trained image processing model is then used to process the images captured by the under-screen camera to obtain images with higher image quality, which improves the quality of photos taken by the under-screen camera.
- an image processing method provided by an embodiment of the present application may include the following steps 201 and 202:
- Step 201 The image processing apparatus collects a first image through a first camera.
- the above-mentioned first camera is an under-screen camera.
- Step 202 The image processing apparatus processes the above-mentioned first image by using an image processing model to obtain a second image.
- the image quality of the second image is higher than that of the first image.
- the above image processing model is obtained after training the preset model with the target sample set.
- Each target sample of the above target sample set includes two images.
- the two images in each target sample are images captured by the second camera and the third camera, respectively, at the same position with the same shooting angle, the same shooting environment, the same shooting parameters, and of the same subject, and the image quality of the image captured by the second camera is lower than that of the image captured by the third camera.
- the second camera is an under-screen camera.
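- As a minimal, non-authoritative sketch of steps 201 and 202 (the framework, file names and helper function below are illustrative assumptions, not part of the embodiments), the enhancement step could be invoked roughly as follows:

```python
# Sketch only: assumes PyTorch and a previously trained, TorchScript-exported enhancement
# network saved as "image_processing_model.pt"; file and function names are illustrative.
import torch
from torchvision.io import read_image
from torchvision.utils import save_image

def enhance_under_screen_photo(model_path: str, first_image_path: str, out_path: str) -> None:
    # Step 201: the first image has already been captured by the under-screen camera.
    first_image = read_image(first_image_path).float() / 255.0   # CxHxW, values in [0, 1]

    # Step 202: process the first image with the trained image processing model.
    model = torch.jit.load(model_path).eval()
    with torch.no_grad():
        second_image = model(first_image.unsqueeze(0)).clamp(0.0, 1.0)

    save_image(second_image, out_path)  # the higher-quality second image

enhance_under_screen_photo("image_processing_model.pt", "first_image.png", "second_image.png")
```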
- the second camera and the first camera may be the same camera or different cameras; specifically, they may be cameras on different electronic devices.
- the above-mentioned second camera and third camera can use the shooting stand provided by the embodiments of the present application, so that the second camera and the third camera capture images at the same position with the same shooting angle, the same shooting environment, and the same shooting parameters.
- the above-mentioned third camera may be a camera with the same specifications as the above-mentioned first camera or the second camera, and is not blocked by a screen.
- the above-mentioned camera specifications may include the external dimensions of the camera, the focal length of the camera, the angle of view, the aperture, and the like.
- the images collected by the second camera and the third camera are images collected for the same shooting object, and the same shooting object may be a still person or landscape.
- the image processing apparatus repeats the sample-collection process N times to obtain a target sample set including N target samples, where each target sample includes two images whose shooting position, shooting angle, shooting subject, and shooting parameters are identical;
- the only difference between the two images in a target sample is the camera that captured them.
- the shooting conditions can differ from sample to sample; that is, each target sample in the target sample set is a sample collected under different shooting conditions.
- the shooting conditions include at least one of the following: the shooting subject, the shooting background, the environmental parameters of the shooting environment, and the shooting parameters of the shooting equipment.
- the preset model is trained by taking the images captured by the under-screen camera (i.e., the second camera) and a normal camera (i.e., the third camera) at the same position and with the same shooting angle as the samples in the target sample set. The trained image processing model is then used to process the first image captured by the first camera to obtain a second image with higher image quality, which improves the quality of photos taken by the under-screen camera.
- before using the above-mentioned image processing model to process the image captured by the first camera, the image processing apparatus needs to train the preset model to obtain the above-mentioned image processing model.
- the image processing method provided in this embodiment of the present application may further include the following steps 203 and 204:
- Step 203 The image processing apparatus acquires the target sample set.
- the above target sample set includes N target samples, and each target sample includes two corresponding images.
- Step 204 The image processing apparatus uses the target sample set to train the preset model to obtain an image processing model.
- the above target sample includes a third image and a fourth image; the third image is an image captured by the second camera, and the fourth image is an image captured by the third camera; the third image and the fourth image are images captured by the second camera and the third camera, respectively, at the same position with the same shooting parameters for the same subject.
- the above-mentioned preset model is a deep learning model with an image processing function, and after the preset model is trained, the above-mentioned image processing model is obtained.
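- A minimal training sketch under stated assumptions (PyTorch, an L1 reconstruction loss, and a small convolutional network standing in for the preset model; none of these choices are prescribed by the embodiments) might look like this:

```python
# Sketch only: (third_image, fourth_image) pairs stand in for the target sample set;
# the network, loss and optimizer below are illustrative assumptions, not the patent's model.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the preset model: a small CNN mapping a degraded image to an enhanced one.
preset_model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

# Stand-in target sample set: N matched pairs of 3x128x128 images.
N = 16
third_images = torch.rand(N, 3, 128, 128)   # captured by the second (under-screen) camera
fourth_images = torch.rand(N, 3, 128, 128)  # captured by the third (normal) camera
loader = DataLoader(TensorDataset(third_images, fourth_images), batch_size=4, shuffle=True)

optimizer = torch.optim.Adam(preset_model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for epoch in range(10):
    for third, fourth in loader:
        pred = preset_model(third)       # enhanced estimate of the low-quality image
        loss = loss_fn(pred, fourth)     # compare against the high-quality reference
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# After training, preset_model plays the role of the image processing model.
```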
- the image processing device can process the image captured by the under-screen camera, thereby obtaining an image with higher image quality.
- each target sample in the above target sample set is a sample collected by the image processing device under different shooting conditions.
- if the image processing device can acquire several different samples under the same shooting conditions, these can also be used as target samples in the target sample set, for example, samples of images captured of different subjects under otherwise identical shooting conditions.
- the number of repeated samples in the target sample set can be reduced, and the training efficiency of the training model can be improved.
- each target sample in the above target sample set includes two images, and the two images are respectively an image collected by the second camera and an image collected by the third camera.
- before using the target sample set to train the preset model, the image processing apparatus also needs to process each target sample in the target sample set.
- each target sample in the target sample set includes an image collected by a second camera and an image collected by a third camera.
- the image processing method provided in this embodiment of the present application may further include the following step 204a:
- step 204a the image processing apparatus uses a grayscale-based image matching algorithm to match the third image and the fourth image to obtain the matched third image and the fourth image.
- the pixels of the matched third image correspond to the pixels of the fourth image.
- the above-mentioned grayscale-based image matching algorithm may include any one of the following: the mean absolute differences (MAD) algorithm, the sum of absolute differences (SAD) algorithm, the sum of squared differences (SSD) algorithm, the mean square differences (MSD) algorithm, the normalized cross-correlation (NCC) algorithm, the sequential similarity detection algorithm (SSDA), and the Hadamard-transform-based sum of absolute transformed differences (SATD) algorithm.
- here, t and f are the image captured by the second camera and the image captured by the third camera, respectively; J and K are the height and width of the matching template used for image matching, respectively; and R(x, y) is the cross-correlation matrix obtained by the operation. Taking the values xm and ym at which R reaches its maximum, the image block f(xm + j, ym + k) that matches t can be obtained.
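- Based on the variable definitions above, formula 1 can be reconstructed (as an assumption; the formula itself is not reproduced here) as the plain cross-correlation

$$R(x, y) = \sum_{j=1}^{J} \sum_{k=1}^{K} t(j, k)\, f(x + j,\, y + k)$$

and the normalized variants listed above (for example NCC) additionally divide this sum by the norms of t and of the corresponding window of f.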
- step 204 may include the following step 204b:
- Step 204b The image processing apparatus uses the target sample set matched by the grayscale-based image matching algorithm to train the above-mentioned preset model.
- in this way, image matching is performed on the samples in the sample set so that the two images in each sample are matched at the pixel level, which meets the image requirements of the preset model during training and results in higher image quality when the trained image processing model processes images captured by the under-screen camera.
- in addition, a margin of 6 to 8 pixels can be reserved at the edge of the image, so that the selected image range is as large as possible while the matching success rate is still guaranteed.
- step 204a may further include the following step 204a1 or step 204a2:
- Step 204a1 The image processing apparatus matches the image of the preset area in the third image with the fourth image to obtain the matched third image and the fourth image.
- Step 204a2 The image processing apparatus matches the image of the preset area in the fourth image with the third image to obtain the matched third image and the fourth image.
- the size of the preset area is: on the basis of the image size of the third image or the fourth image, the image size after the edge is reduced by a preset number of pixels.
- the third image may be used as the basis or the fourth image may be used as the basis.
- a matching template needs to be used, and the matching template is the image of the above-mentioned preset area, and the height and width of the matching template are J and K in the above formula 1, respectively.
- the purpose of using a matching template with a larger range is to improve the matching degree of the third image and the fourth image, so that the third image and the fourth image can achieve a pixel-level matching degree. Furthermore, the two images in each target sample in the target sample set can meet the corresponding requirements at the pixel level.
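- As an illustrative sketch of steps 204a1/204a2 under stated assumptions (OpenCV's matchTemplate with a normalized cross-correlation score, a 6-pixel margin, and illustrative file names; none of these are mandated by the embodiments):

```python
# Sketch only: crop a margin from the third image to form the matching template,
# locate it in the fourth image, and crop the fourth image to the matched region.
import cv2
import numpy as np

def match_pair(third_gray: np.ndarray, fourth_gray: np.ndarray, margin: int = 6):
    h, w = third_gray.shape
    # Preset area: the third image with its edges reduced by `margin` pixels.
    template = third_gray[margin:h - margin, margin:w - margin]

    # Grayscale matching; TM_CCORR_NORMED is a normalized cross-correlation score.
    response = cv2.matchTemplate(fourth_gray, template, cv2.TM_CCORR_NORMED)
    _, _, _, (xm, ym) = cv2.minMaxLoc(response)   # location of the maximum of R(x, y)

    th, tw = template.shape
    matched_fourth = fourth_gray[ym:ym + th, xm:xm + tw]
    return template, matched_fourth               # pixel-aligned image pair

third = cv2.imread("third_image.png", cv2.IMREAD_GRAYSCALE)
fourth = cv2.imread("fourth_image.png", cv2.IMREAD_GRAYSCALE)
aligned_third, aligned_fourth = match_pair(third, fourth)
```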
- optionally, the images in each sample may be divided, so that one sample is split into multiple samples.
- step 203 may include the following steps 203a1 and 203a2:
- Step 203a1 The image processing apparatus divides the third image collected by the second camera into M third sub-images, and divides the fourth image collected by the third camera into M fourth sub-images.
- M third sub-images correspond to the M fourth sub-images one-to-one, and one third sub-image corresponds to one fourth sub-image.
- Step 203a2 The image processing apparatus takes a target third sub-image among the above-mentioned M third sub-images and the fourth sub-image corresponding to the target third sub-image among the M fourth sub-images as one sample in the target sample set, to obtain the target sample set.
- the positions and number of the divisions are the same for both images, so that each of the M third sub-images obtained by the division has a corresponding image among the M fourth sub-images.
- as shown in FIG. 3, a sample contains two images (image 31 and image 32).
- image 31 is divided into 4 images (images a1, a2, a3, and a4) and image 32 is divided into 4 images (images b1, b2, b3, and b4); the divided images are mapped one-to-one (a1 corresponds to b1, a2 to b2, a3 to b3, and a4 to b4), and each pair of corresponding images is used as a new sample (for example, a1 and b1 can be used as a new sample).
- the division position 31a of image 31 and the division position 32a of image 32 are the same (that is, the division points and the relative positions of the objects in the images are the same).
- the target sample set containing N samples can be expanded to a sample set containing N*4 samples.
- the capacity of the samples can be greatly expanded, and the requirements for computer configuration in the subsequent training process can be reduced.
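- A minimal sketch of steps 203a1 and 203a2, assuming M = 4 quadrant sub-images as in the FIG. 3 example (the division count and data layout are illustrative):

```python
# Sketch only: split each matched image pair into 4 corresponding sub-image pairs,
# expanding a set of N samples into N * 4 samples.
import numpy as np

def split_into_quadrants(image: np.ndarray) -> list:
    h, w = image.shape[:2]
    return [
        image[: h // 2, : w // 2],   # top-left      (a1 / b1)
        image[: h // 2, w // 2 :],   # top-right     (a2 / b2)
        image[h // 2 :, : w // 2],   # bottom-left   (a3 / b3)
        image[h // 2 :, w // 2 :],   # bottom-right  (a4 / b4)
    ]

def expand_sample_set(samples):
    expanded = []
    for third_image, fourth_image in samples:
        # Same division positions and count for both images, so sub-images correspond one-to-one.
        for third_sub, fourth_sub in zip(split_into_quadrants(third_image),
                                         split_into_quadrants(fourth_image)):
            expanded.append((third_sub, fourth_sub))
    return expanded  # N samples become N * 4 samples
```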
- to sum up, the images captured by the under-screen camera and a normal camera at the same position and with the same shooting angle are taken as samples in the sample set, and a sample set containing N samples is then obtained by changing the shooting conditions or replacing the photographed subject multiple times.
- pixel-level matching can be performed on the two images in each sample, and the preset model can be trained using the matched sample set.
- the trained image processing model is used to process the images captured by the under-screen camera to obtain images with higher image quality, which improves the quality of the photos taken by the under-screen camera.
- the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method.
- the image processing apparatus provided by the embodiments of the present application is described by taking an image processing apparatus executing an image processing method as an example.
- an embodiment of the present application provides a shooting stand for capturing the above-mentioned sample images.
- the shooting stand includes: a stand 5, a slide rail 3 connected to the stand 5, and a first pitch stage 1 and a second pitch stage 2 disposed on the slide rail 3.
- the first pitch stage 1 is used for supporting the first camera;
- the second pitch stage 2 is used for supporting the second camera.
- the first camera is used for collecting the first target image
- the second camera is used for collecting the second target image.
- the first target image is the third image in the above-mentioned embodiment of the image processing method
- the second target image is the fourth image in the above-mentioned embodiment of the image processing method.
- the first target image and the second target image are: the first camera and the second camera adopt the same shooting angle, the same shooting environment, the same shooting parameters, and the images collected from the same shooting object at the same position.
- the above-mentioned first target image and second target image are a sample in a target sample set, and the target sample set is used for training a preset model.
- the first camera and the second camera described above are different from the first camera and the second camera involved in the image processing method shown in FIG. 2 .
- the first camera may be the same as the second camera involved in the image processing method shown in FIG. 2
- the second camera may be the same as the third camera involved in the image processing method shown in FIG. 2 .
- the above-mentioned shooting stand includes two control modes: manual control and automatic control.
- the user can manually or automatically adjust the rotation of the first pitch stage 1 and the second pitch stage 2 in the x-y plane to calibrate the pitch angles of the above two pitch stages.
- the first tilt stage 1 can be adjusted to the target angle and moved to the target position, and then the first camera can be controlled to capture images. After that, move the first pitch stage 1 away, adjust the second pitch stage to the target angle and move to the target position, and then control the second camera to capture images.
- the above-mentioned shooting support further includes: a control module, which is used to control the first tilting stage to move to the target position, adjust the first tilting stage to the target angle, and control the first camera to collect the first target image; the The control module is further configured to control the second pitching stage to move to the target position, adjust the second pitching stage to the target angle, and control the second camera to collect the second target image.
- the above-mentioned shooting support further includes: a conversion interface 4 between the slide rail 3 and the support 5 , a power supply system and a programmable microcontroller 6 .
- the user can precisely control the above-mentioned first pitch stage 1 and second pitch stage 2 through the power supply system and the programmable microcontroller 6.
- for example, the user can write code so that, by means of the power supply system and the microcontroller 6, the first pitch stage 1 and the second pitch stage 2 electrically move the cameras mounted on them along the z-direction of the slide rail 3 to achieve precise displacement.
- the above-mentioned first camera may be an under-screen camera installed on a first electronic device;
- the above-mentioned second camera may be installed on a second electronic device;
- the position of the first camera relative to the first electronic device is the same as the position of the second camera relative to the second electronic device.
- that is, the installation position of the under-screen camera 11 (i.e., the above-mentioned first camera) on the electronic device 10a (i.e., the first electronic device) is the same as the installation position of the camera 12 (i.e., the above-mentioned second camera) on the electronic device 10b (i.e., the second electronic device). Only in this way can the first camera and the second camera capture images at the same position with the same shooting angle.
- each device can be connected by means of a conversion interface
- the bracket 5 and the guide rail 3 are connected by means of a conversion interface 4
- the first electronic device and the second electronic device can be fixed by the fixtures and plate clamps provided on the first pitch stage 1 and the second pitch stage 2, to increase their stability.
- the straight line where the lower edges of the two electronic devices are located is parallel to the direction of the guide rail 3 .
- the microcontroller 6 can be programmed according to the parameters of the shooting stand so that it can precisely translate the first pitch stage 1 and the second pitch stage 2 on the slide rail 3; the stand 5 is fixed and the pan-tilt head is adjusted so that its tilt is as close to 0 as possible (a slight tilt does not affect the accuracy of data acquisition); and the slide rail 3, together with the power supply system and the microcontroller 6, is mounted on the stand 5 by means of the conversion interface 4.
- the first electronic device and the second electronic device are respectively fixed on the first elevation platform 1 and the second elevation platform 2 .
- the shooting stand needs to be adjusted and calibrated first, so that the electronic devices can be successively moved to the same position by means of the slide rail 3 to maintain the same posture.
- with the help of image acquisition software, the image captured by the first camera is obtained and saved to a computer.
- then, the second pitch stage 2 is translated on the slide rail 3 by the power supply and the microcontroller 6 to move the second electronic device to the same shooting position, and the second pitch stage 2 is adjusted so that the second camera can capture the same scene as that captured by the first camera; the image captured by the second camera is then obtained with the help of the above-mentioned image acquisition software, which completes the calibration work.
- the shooting stand provided by the embodiments of the present application can accurately control the pitch angles and positions of the first pitch stage and the second pitch stage, so that the first camera and the second camera can capture images at the same position with the same shooting angle;
- in this way, pixel-level correspondence can be achieved between the images captured by the first camera and the second camera.
- FIG. 5 is a schematic diagram of a possible structure for implementing an image processing apparatus provided by an embodiment of the present application.
- the image processing apparatus 600 includes: an acquisition module 601 and a processing module 602;
- the acquisition module 601 is used to capture a first image through a first camera, where the first camera is an under-screen camera;
- the processing module 602 is used to process the first image captured by the acquisition module 601 using an image processing model to obtain a second image, where the image quality of the second image is higher than that of the first image;
- the image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images captured by a second camera and a third camera at the same position with the same shooting angle, the same shooting environment, the same shooting parameters, and of the same subject, and the image quality of the image captured by the second camera is lower than that of the image captured by the third camera; the second camera is an under-screen camera.
- the preset model is trained by taking the images captured by the under-screen camera (i.e., the second camera) and a normal camera (i.e., the third camera) at the same position and with the same shooting angle as the samples in the target sample set. The trained image processing model is then used to process the first image captured by the first camera to obtain a second image with higher image quality, which improves the quality of photos taken by the under-screen camera.
- the image processing apparatus 600 further includes: an obtaining module 603 and a training module 604; the obtaining module 603 is used to obtain the target sample set; the training module 604 is used to train the preset model using the target sample set obtained by the obtaining module 603, to obtain the image processing model; the target sample includes a third image and a fourth image; the third image is an image captured by the second camera, and the fourth image is an image captured by the third camera; the third image and the fourth image are images captured by the second camera and the third camera, respectively, at the same position with the same shooting parameters for the same subject.
- the image processing device can process the image captured by the under-screen camera, thereby obtaining an image with higher image quality.
- each target sample in the target sample set is a sample collected under different shooting conditions; wherein, the shooting conditions include at least one of the following: shooting object, shooting background, environmental parameters of shooting environment, shooting parameters of shooting equipment .
- the number of repeated samples in the target sample set can be reduced, and the training efficiency of the training model can be improved.
- the image processing apparatus 600 further includes: a matching module 605; the matching module 605 is configured to match the third image and the fourth image using a grayscale-based image matching algorithm to obtain a matched third image and fourth image, where the pixels of the matched third image and fourth image correspond to each other.
- in this way, image matching is performed on the samples in the sample set so that the two images in each sample are matched at the pixel level, which meets the image requirements of the preset model during training and results in higher image quality when the trained image processing model processes images captured by the under-screen camera.
- the matching module 605 is specifically used to match the image of the preset area in the third image with the fourth image to obtain the matched third image and the fourth image; or, the matching module 605 is specifically used to match The image of the preset area in the fourth image is matched with the third image to obtain the matched third image and the fourth image; wherein, the size of the preset area is: based on the image size of the third image or the fourth image , the size of the image after the edges are reduced by a preset number of pixels.
- the acquisition module 603 is specifically configured to divide the third image into M third sub-images, and divide the fourth image into M fourth sub-images; the M third sub-images and the M fourth sub-images The images are in one-to-one correspondence; the acquisition module 603 is also specifically configured to use the target third sub-image in the M third sub-images and the fourth sub-image corresponding to the target third sub-image in the M fourth sub-images as the target A sample in the sample set to get the target sample set.
- the capacity of the sample can be greatly expanded, and the requirements for computer configuration in the subsequent training process can be reduced.
- the image processing apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal.
- the apparatus may be a mobile electronic device or a non-mobile electronic device.
- exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, etc., which is not specifically limited in the embodiments of the present application.
- the image processing apparatus in this embodiment of the present application may be an apparatus having an operating system.
- the operating system may be an Android (Android) operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
- the image processing apparatus provided in this embodiment of the present application can implement each process implemented by the image processing apparatus in the method embodiments of FIG. 2 and FIG. 3 , and to avoid repetition, details are not described here.
- to sum up, the images captured by the under-screen camera and a normal camera at the same position and with the same shooting angle are taken as samples in the sample set, and then the shooting conditions are changed or the photographed subject is replaced multiple times to obtain a sample set containing N samples.
- pixel-level matching can be performed on the two images in each sample, and the preset model can be trained using the matched sample set.
- the trained image processing model is used to process the image collected by the under-screen camera, to obtain an image with higher image quality, which improves the image quality of the photo taken by the under-screen camera.
- an embodiment of the present application further provides an electronic device, including a processor 110, a memory 109, and a program or instruction stored in the memory 109 and executable on the processor 110, where the program or instruction, when executed by the processor 110, implements each process of the above image processing method embodiments and can achieve the same technical effect, which is not repeated here to avoid repetition.
- the electronic devices in the embodiments of the present application include the aforementioned mobile electronic devices and non-mobile electronic devices.
- FIG. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
- the electronic device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110, etc. part.
- the electronic device 100 may also include a power source (such as a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
- the structure of the electronic device shown in FIG. 6 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components, which will not be repeated here.
- the input unit 104 is used to capture a first image through a first camera, where the first camera is an under-screen camera; the processor 110 is used to process the first image captured by the input unit 104 using an image processing model to obtain a second image, where the image quality of the second image is higher than that of the first image; the image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images captured by a second camera and a third camera at the same position with the same shooting angle, the same shooting environment, the same shooting parameters, and of the same subject, and the image quality of the image captured by the second camera is lower than that of the image captured by the third camera; the second camera is an under-screen camera.
- the processor 110 is configured to acquire a target sample set; the processor 110 is configured to train a preset model by using the target sample set to obtain an image processing model; wherein the target sample includes: a third image and a fourth image; The three images are captured by the second camera, and the fourth image is captured by the third camera; the third and fourth images are captured by the second camera and the third camera at the same location using the same shooting parameters for the same target, respectively image.
- the processor 110 is configured to use a grayscale-based image matching algorithm to match the third image and the fourth image to obtain the matched third image and the fourth image; wherein, the matched third image corresponds to the pixel of the fourth image.
- the processor 110 is specifically configured to match the image of the preset area in the third image with the fourth image to obtain the matched third image and the fourth image; or, the processor 110 is specifically configured to match The image of the preset area in the fourth image is matched with the third image to obtain the matched third image and the fourth image; wherein, the size of the preset area is: based on the image size of the third image or the fourth image , the size of the image after the edges are reduced by a preset number of pixels.
- the processor 110 is specifically configured to divide the third image into M third sub-images, and divide the fourth image into M fourth sub-images; the M third sub-images and the M fourth sub-images The images are in one-to-one correspondence; the processor 110 is specifically further configured to use the target third sub-image in the M third sub-images and the fourth sub-image corresponding to the target third sub-image in the M fourth sub-images as the target A sample in the sample set to get the target sample set.
- the images captured by the under-screen camera and a normal camera at the same position and with the same shooting angle are used as samples in the sample set, and a sample set containing N samples is then obtained by changing the shooting conditions or replacing the photographed subject multiple times.
- pixel-level matching can be performed on the two images in each sample, and the preset model can be trained using the matched sample set.
- the trained image processing model is used to process the images collected by the under-screen camera to obtain images with higher image quality, which improves the image quality of photos taken by the under-screen camera.
- the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
- the display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
- the user input unit 107 includes a touch panel 1071 and other input devices 1072 .
- the touch panel 1071 is also called a touch screen.
- the touch panel 1071 may include two parts, a touch detection device and a touch controller.
- Other input devices 1072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
- Memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems.
- the processor 110 may integrate an application processor and a modem processor, wherein the application processor mainly processes the operating system, user interface, and application programs, and the like, and the modem processor mainly processes wireless communication. It can be understood that, the above-mentioned modulation and demodulation processor may not be integrated into the processor 110 .
- embodiments of the present application further provide a readable storage medium, where a program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, each process of the above image processing method embodiments is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
- the processor is the processor in the electronic device described in the foregoing embodiments.
- the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
- an embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above image processing method embodiments.
- the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system-on-chip, a system-on-a-chip, or a system-on-a-chip, or the like.
- the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
- based on this understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for enabling an electronic device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
Claims (17)
- 1. An image processing method, the method comprising: capturing a first image through a first camera, the first camera being an under-screen camera; and processing the first image using an image processing model to obtain a second image, the image quality of the second image being higher than the image quality of the first image; wherein the image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set comprises two images; the two images in each target sample are images captured by a second camera and a third camera, respectively, at the same position with the same shooting angle, the same shooting environment and the same shooting parameters for the same subject, and the image quality of the image captured by the second camera is lower than the image quality of the image captured by the third camera; the second camera is an under-screen camera.
- 2. The method according to claim 1, wherein before the processing the first image using the image processing model, the method further comprises: acquiring the target sample set; and training the preset model with the target sample set to obtain the image processing model; wherein the target sample comprises a third image and a fourth image; the third image is an image captured by the second camera, and the fourth image is an image captured by the third camera.
- 3. The method according to claim 2, wherein each target sample in the target sample set is a sample collected under different shooting conditions, and the shooting conditions comprise at least one of the following: a subject, a shooting background, environmental parameters of the shooting environment, and shooting parameters of the shooting equipment.
- 4. The method according to claim 2, wherein before the training the preset model with the target sample set, the method further comprises: matching the third image and the fourth image using a grayscale-based image matching algorithm to obtain a matched third image and fourth image, the pixels of the matched third image and fourth image corresponding to each other.
- 5. The method according to claim 4, wherein the matching the third image and the fourth image using the grayscale-based image matching algorithm to obtain the matched third image and fourth image comprises: matching an image of a preset area in the third image with the fourth image to obtain the matched third image and fourth image; or matching an image of a preset area in the fourth image with the third image to obtain the matched third image and fourth image; wherein the size of the preset area is the image size of the third image or the fourth image with its edges reduced by a preset number of pixels.
- 6. The method according to any one of claims 2 to 5, wherein the acquiring the target sample set comprises: dividing the third image into M third sub-images and dividing the fourth image into M fourth sub-images, the M third sub-images corresponding one-to-one to the M fourth sub-images; and taking a target third sub-image among the M third sub-images and the fourth sub-image corresponding to the target third sub-image among the M fourth sub-images as one sample in the target sample set, to obtain the target sample set.
- 7. An image processing apparatus, comprising an acquisition module and a processing module; the acquisition module is configured to capture a first image through a first camera, the first camera being an under-screen camera; the processing module is configured to process the first image captured by the acquisition module using an image processing model to obtain a second image, the image quality of the second image being higher than the image quality of the first image; wherein the image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set comprises two images; the two images in each target sample are images captured by a second camera and a third camera, respectively, at the same position with the same shooting angle, the same shooting environment and the same shooting parameters for the same subject, and the image quality of the image captured by the second camera is lower than the image quality of the image captured by the third camera; the second camera is an under-screen camera.
- 8. The apparatus according to claim 7, wherein the image processing apparatus further comprises an obtaining module and a training module; the obtaining module is configured to obtain the target sample set before the processing module processes the first image using the image processing model; the training module is configured to train the preset model with the target sample set to obtain the image processing model; wherein the target sample comprises a third image and a fourth image; the third image is an image captured by the second camera, and the fourth image is an image captured by the third camera.
- 9. The apparatus according to claim 8, wherein each target sample in the target sample set is a sample collected under different shooting conditions, and the shooting conditions comprise at least one of the following: a subject, a shooting background, environmental parameters of the shooting environment, and shooting parameters of the shooting equipment.
- 10. The apparatus according to claim 8, wherein the image processing apparatus further comprises a matching module; the matching module is configured to, before the training module trains the preset model with the target sample set, match the third image and the fourth image using a grayscale-based image matching algorithm to obtain a matched third image and fourth image, the pixels of the matched third image and fourth image corresponding to each other.
- 11. The apparatus according to claim 10, wherein the matching module is specifically configured to match an image of a preset area in the third image with the fourth image to obtain the matched third image and fourth image, or to match an image of a preset area in the fourth image with the third image to obtain the matched third image and fourth image; wherein the size of the preset area is the image size of the third image or the fourth image with its edges reduced by a preset number of pixels.
- 12. The apparatus according to any one of claims 8 to 11, wherein the obtaining module is specifically configured to divide the third image into M third sub-images and divide the fourth image into M fourth sub-images, the M third sub-images corresponding one-to-one to the M fourth sub-images; and to take a target third sub-image among the M third sub-images and the fourth sub-image corresponding to the target third sub-image among the M fourth sub-images as one sample in the target sample set, to obtain the target sample set.
- 13. A shooting stand, comprising: a stand, a slide rail connected to the stand, and a first pitch stage and a second pitch stage arranged on the slide rail, the first pitch stage being used to support a first camera and the second pitch stage being used to support a second camera; the first camera is used to capture a first target image; the second camera is used to capture a second target image; wherein the first target image and the second target image are images captured by the first camera and the second camera at the same position with the same shooting angle, the same shooting environment and the same shooting parameters for the same subject; the first target image and the second target image form one sample in a target sample set, and the target sample set is used to train a preset model.
- 14. An electronic device, comprising a processor, a memory, and a program or instruction stored on the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 6.
- 15. A readable storage medium, wherein a program or instruction is stored on the readable storage medium, and the program or instruction, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 6.
- 16. A computer program product, wherein the program product is executed by at least one processor to implement the image processing method according to any one of claims 1 to 6.
- 17. A user equipment (UE), wherein the UE is configured to perform the image processing method according to any one of claims 1 to 6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110097229.1 | 2021-01-25 | ||
CN202110097229.1A CN112887598A (zh) | 2021-01-25 | 2021-01-25 | Image processing method and apparatus, shooting stand, electronic device, and readable storage medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022156683A1 true WO2022156683A1 (zh) | 2022-07-28 |
Family
ID=76050941
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/072577 WO2022156683A1 (zh) | 2022-01-18 | Image processing method and apparatus, shooting stand, electronic device, and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112887598A (zh) |
WO (1) | WO2022156683A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112887598A (zh) * | 2021-01-25 | 2021-06-01 | 维沃移动通信有限公司 | Image processing method and apparatus, shooting stand, electronic device, and readable storage medium |
CN116416656B (zh) * | 2021-12-29 | 2024-10-15 | 荣耀终端有限公司 | Image processing method and apparatus based on under-screen image, and storage medium |
CN115580690B (zh) * | 2022-01-24 | 2023-10-20 | 荣耀终端有限公司 | Image processing method and electronic device |
CN115565213B (zh) * | 2022-01-28 | 2023-10-27 | 荣耀终端有限公司 | Image processing method and apparatus |
CN114785908A (zh) * | 2022-04-20 | 2022-07-22 | Oppo广东移动通信有限公司 | Electronic device, image acquisition method of electronic device, and computer-readable storage medium |
CN115100054A (zh) * | 2022-06-16 | 2022-09-23 | 昆山国显光电有限公司 | Display device and under-screen photographing processing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9549101B1 (en) * | 2015-09-01 | 2017-01-17 | International Business Machines Corporation | Image capture enhancement using dynamic control image |
CN111107269A (zh) * | 2019-12-31 | 2020-05-05 | 维沃移动通信有限公司 | Photographing method, electronic device, and storage medium |
CN111951192A (zh) * | 2020-08-18 | 2020-11-17 | 义乌清越光电科技有限公司 | Method for processing captured image, and photographing device |
CN112887598A (zh) * | 2021-01-25 | 2021-06-01 | 维沃移动通信有限公司 | Image processing method and apparatus, shooting stand, electronic device, and readable storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108924420B (zh) * | 2018-07-10 | 2020-08-04 | Oppo广东移动通信有限公司 | Image capturing method and apparatus, medium, electronic device, and model training method |
CN110880003B (zh) * | 2019-10-12 | 2023-01-17 | 中国第一汽车股份有限公司 | Image matching method and apparatus, storage medium, and vehicle |
CN111311523B (zh) * | 2020-03-26 | 2023-09-05 | 北京迈格威科技有限公司 | Image processing method, apparatus and system, and electronic device |
- 2021-01-25: CN application CN202110097229.1A, publication CN112887598A (zh), status: active, pending
- 2022-01-18: WO (PCT) application PCT/CN2022/072577, publication WO2022156683A1 (zh), status: active, application filing
Also Published As
Publication number | Publication date |
---|---|
CN112887598A (zh) | 2021-06-01 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22742162; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22742162; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.01.2024) |