CN115334234A - Method and device for supplementing image information by taking pictures in dark environment - Google Patents

Method and device for supplementing image information by taking pictures in dark environment

Info

Publication number
CN115334234A
Authority
CN
China
Prior art keywords
image
information
scene
image information
useful
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210766024.2A
Other languages
Chinese (zh)
Other versions
CN115334234B (en)
Inventor
王晓雷
王晓博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ontim Technology Co Ltd
Original Assignee
Beijing Ontim Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ontim Technology Co Ltd filed Critical Beijing Ontim Technology Co Ltd
Priority to CN202210766024.2A priority Critical patent/CN115334234B/en
Priority to CN202410320303.5A priority patent/CN118264919A/en
Publication of CN115334234A publication Critical patent/CN115334234A/en
Application granted granted Critical
Publication of CN115334234B publication Critical patent/CN115334234B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method and a device for supplementing image information when taking a photo in a dim-light environment. The method comprises the following steps: acquiring an original RAW image shot by a camera; preprocessing the RAW image and identifying useful real-scene information in it; performing separation processing and enhancement processing on the useful real-scene information; generating background image information according to the useful real-scene information; and synthesizing the enhanced useful real-scene information with the background image information to obtain a night-scene image. The method supplements the color and texture of the night-scene image in real time, so that a user can capture at night an effect similar to a bright daytime scene, which improves the user experience.

Description

Method and device for supplementing image information by taking pictures in dark light environment
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for supplementing image information by taking a picture in a dark environment.
Background
Mobile terminals have entered the 5G era, and innovations to the terminal are continuously explored, updated, and iterated. With the development of the technology, mobile phones have increasingly strong processing performance and increasingly good photographing effects, yet dim-light photography on the terminal remains an urgent problem. The processing performance of the built-in ISP of current mobile phones is limited and is largely monopolized by the terminal platform vendors, while differentiated shooting styles have almost become standard configuration. When a photo is taken in dim light, the mobile phone platform suffers from problems such as high noise, poor definition, and dark colors, resulting in a poor user experience.
The existing way for the built-in ISP of a mobile phone to handle dim-light photography is to perform effect-optimization compensation on the captured RAW image. When the amount of light entering the lens is small in a dark environment, the RAW data output by the image sensor is contaminated with noise. When a traditional ISP restores the photo, the noise is forcibly removed by a specific filter, so part of the image information is lost and the photographing effect is affected.
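As a non-limiting illustration of why such filtering loses information, the following Python sketch applies an aggressive bilateral filter to a low-light frame; the bilateral filter is only an assumed stand-in for the unspecified "specific filter", and the file names are hypothetical.

    import cv2

    def naive_isp_denoise(bgr_image):
        # Large spatial and range sigmas suppress noise but also flatten the
        # faint edges and textures that survive in dark regions.
        return cv2.bilateralFilter(bgr_image, d=15, sigmaColor=80, sigmaSpace=80)

    if __name__ == "__main__":
        noisy = cv2.imread("night_shot.jpg")            # hypothetical input file
        cv2.imwrite("denoised_flat.jpg", naive_isp_denoise(noisy))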
Disclosure of Invention
Based on this, the invention aims to provide a method and a device for supplementing image information when taking a photo in a dim-light environment, so that color and texture supplementation of a night-scene image is realized in real time, a user can capture at night an effect similar to a bright daytime scene, and the user experience is improved.
In a first aspect, the present invention provides a method for taking a picture in a dark environment to supplement image information, comprising the steps of:
acquiring an original RAW image shot by a camera;
preprocessing the RAW image and identifying useful real-scene information in the RAW image;
performing separation processing and enhancement processing on the useful real-scene information;
generating background image information according to the useful real-scene information;
and synthesizing the enhanced useful real-scene information with the background image information to obtain a night-scene image.
Further, performing separation processing on the useful real-scene information comprises:
performing edge extraction on the useful real-scene information by using an edge detection algorithm to obtain the separated useful real-scene information.
Further, performing enhancement processing on the useful real-scene information comprises:
converting the useful real-scene information into a bright daytime scene style by using a style conversion network.
Further, generating background image information according to the useful real-scene information comprises:
obtaining the scene of the image according to the type of the useful real-scene information;
and generating background image information corresponding to the scene according to the scene of the image.
Further, generating background image information corresponding to the scene according to the scene of the image comprises:
performing image information supplementation, including color and texture supplementation, by using an image color-texture compensation network trained for the corresponding scene, to generate the background image information.
Further, generating background image information corresponding to the scene according to the scene of the image comprises:
determining a filter whose style corresponds to the background image according to the scene of the image;
and superimposing the filter on the original image to generate the background image information.
Further, generating background image information corresponding to the scene according to the scene of the image comprises:
matching map materials in a material library according to the scene of the image;
and superimposing the map materials on the original image to generate the background image information.
In a second aspect, the present invention further provides an apparatus for taking a photo in a dark environment to supplement image information, comprising:
the image acquisition module is used for acquiring an original RAW image shot by the camera;
the useful real-scene information identification module is used for preprocessing the RAW image and identifying useful real-scene information in the RAW image;
the useful real-scene information enhancement module is used for performing separation processing and enhancement processing on the useful real-scene information;
the background image information generating module is used for generating background image information according to the useful real-scene information;
and the synthesis module is used for synthesizing the enhanced useful real-scene information and the background image information to obtain a night-scene image.
In a third aspect, the present invention provides an electronic device, including:
at least one memory and at least one processor;
the memory for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to perform the steps of the method for taking a photo in a dim-light environment to supplement image information according to any one of the first aspect of the invention.
In a fourth aspect, the present invention also provides a computer-readable storage medium, characterized in that:
the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of taking a photo in a dim light environment to supplement image information according to any one of the first aspect of the present invention.
The invention provides a method and a device for supplementing image information when taking a photo in a dim-light environment. With this scheme, color and texture supplementation of a night-scene image can be realized in real time, so that a user can capture at night an effect similar to a bright daytime scene, and the user experience is improved. The night-scene photo is converted into a photo combining virtual and real content based on a specific algorithm, an AI model, or similar methods; the algorithm compensates the dim-light image information and generates virtual scenery for the user in different dim-light scenes. This solves pain points of night photography such as dark, blurred, and unclear pictures, heavy noise, and completely black regions where no light falls. By combining virtual and real images, the real-scene information the user wants can still be captured, while the current scene is decorated with virtual imagery.
For a better understanding and practice, the present invention is described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic flow chart of a method for taking a photo in a dark environment to supplement image information according to the present invention;
FIG. 2 is a schematic structural diagram of an apparatus for taking a picture in a dark environment to supplement image information according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the embodiments in the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims. In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes the association relationship of the associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
To solve the problems in the background art, an embodiment of the present application provides a method for taking a photo in a dark environment to supplement image information. As shown in FIG. 1, the method includes the following steps:
S01: acquiring an original RAW image shot by a camera.
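A minimal sketch of step S01, assuming the camera output is available as a DNG file and read with the rawpy library; neither the library nor the file name comes from the embodiment, which only requires an original RAW image shot by a camera.

    import rawpy

    # Hypothetical capture file; in practice the RAW buffer would come from
    # the camera pipeline rather than from disk.
    with rawpy.imread("dark_scene.dng") as raw:
        bayer = raw.raw_image_visible.copy()        # raw Bayer mosaic (uint16)
        rgb = raw.postprocess(use_camera_wb=True)   # quick demosaic for preview
    print(bayer.shape, bayer.dtype, rgb.shape)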
S02: and preprocessing the RAW image, and identifying useful real scene information in the RAW image.
Useful real-scene information refers to image content captured in a night-scene photo that is unclear, for example dark, but still has a recognizable outline. For instance, in a photo of trees taken at night, a distant tree appears only as a simple black silhouette rather than being clearly visible as it would be in the daytime; the tree outline information in such a night-scene photo is what is referred to here as useful real-scene information.
In a preferred embodiment, the useful real-scene information may be identified using a UNet neural network.
The network is shaped like the English letter U, hence the name UNet. It is built on the FCN architecture. On the left side, a series of convolutional stages (typically five) processes the input to extract image features; a classic feature extraction backbone such as VGG or ResNet can be used here. In the structure on the right side, the features of the lowest layer are first up-sampled until their shape matches the features of the layer above, the two feature maps are concatenated, a convolutional layer is added to reduce the number of channels, the reduced features are up-sampled again, and the previous operation is repeated. Finally, when the feature map has the same shape as the original image, a convolutional layer is added to classify each pixel.
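A compact PyTorch sketch of the U-shaped network just described: an encoder of stacked convolutions, a decoder that up-samples, concatenates the matching encoder feature map and reduces channels, and a final 1x1 convolution that classifies every pixel. The depth and channel sizes are illustrative choices, not taken from the embodiment.

    import torch
    import torch.nn as nn

    def double_conv(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class MiniUNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.enc1 = double_conv(3, 32)                  # encoder stages
            self.enc2 = double_conv(32, 64)
            self.enc3 = double_conv(64, 128)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
            self.dec2 = double_conv(128 + 64, 64)           # fuse skip features, reduce channels
            self.dec1 = double_conv(64 + 32, 32)
            self.head = nn.Conv2d(32, n_classes, 1)         # per-pixel classification

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            e3 = self.enc3(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up(e3), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
            return self.head(d1)

    mask_logits = MiniUNet()(torch.randn(1, 3, 256, 256))   # shape (1, 2, 256, 256)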
S03: and carrying out separation processing and enhancement processing on the useful live-action information.
Preferably, the method comprises the following substeps:
S031: performing edge extraction on the useful real-scene information by using an edge detection algorithm to obtain the separated useful real-scene information.
In another example, the image may be separated by performing edge extraction with a feature-extraction neural network.
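A minimal sketch of step S031 under the assumption that the unspecified edge detection algorithm is Canny and that the useful real-scene region arrives as a binary mask from the segmentation step; the thresholds and names are illustrative.

    import cv2
    import numpy as np

    def separate_real_scene(bgr, mask):
        # Keep only the masked real-scene pixels, then outline them with Canny.
        region = cv2.bitwise_and(bgr, bgr, mask=mask)           # mask: uint8, 255 = real scene
        gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, threshold1=30, threshold2=90)   # low thresholds for dim scenes
        return edges

    # usage: edges = separate_real_scene(preview_bgr, unet_mask.astype(np.uint8))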
S032: and converting the useful live-action information into a daytime highlight scene style by using a style conversion network.
In a specific embodiment, the images are processed using a CycleGAN network. The two mirrored GAN branches of the CycleGAN learn two mappings: the first mapping G(B) converts a real low-illumination, low-quality image realB into a generated high-quality image FakeA under normal illumination, and the second mapping G(A) converts a real high-quality image realA under normal illumination into a generated low-illumination, low-quality image FakeB. FakeB is then fed back into the network and converted by the mapping G(B) into an intermediate high-quality image under normal illumination, which enforces cycle consistency during training.
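An inference-only sketch of the style conversion, assuming a CycleGAN generator for the mapping G(B) (low-light domain to daytime domain) has already been trained and exported as a TorchScript file; the checkpoint name and the preprocessing are assumptions, not part of the embodiment.

    import torch
    import torchvision.transforms as T
    from PIL import Image

    to_tensor = T.Compose([T.Resize((256, 256)), T.ToTensor(),
                           T.Normalize([0.5] * 3, [0.5] * 3)])   # scale to [-1, 1] as in CycleGAN

    g_b = torch.jit.load("g_b.pt").eval()                        # hypothetical scripted generator G(B)
    real_b = to_tensor(Image.open("night_region.png").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fake_a = g_b(real_b)                                     # daytime-style version of the region
    day_style = (fake_a.squeeze(0) * 0.5 + 0.5).clamp(0, 1)      # back to [0, 1] for saving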
S04: and generating background image information according to the useful real scene information.
When some regions of a dark scene photographed by the user contain no useful image information, an AI network is needed to freely generate image color and texture information in a style matching the current scene, based on the useful real-scene information present in that scene. The virtual image generated in this way supplements the regions of the picture that lack useful image information, so that part of the image is virtually generated, which ultimately solves the problem of missing original image data when shooting in a dark environment.
Preferably, the method comprises the following substeps:
S041: obtaining the scene of the image according to the type of the useful real-scene information.
S042: generating background image information corresponding to the scene according to the scene of the image.
In a specific embodiment, the image information can be supplemented, including color and texture supplementation, by using an image color-texture compensation network trained for the corresponding scene, to generate the background image information.
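A sketch of this step that treats the color-texture compensation network as an opaque image-to-image model: it receives the night image together with a mask of the regions lacking useful information and fills only those regions. The input layout and the blending are assumptions for illustration.

    import torch

    def generate_background(compensation_net, night_image, hole_mask):
        # night_image: (1, 3, H, W) in [0, 1]; hole_mask: (1, 1, H, W), 1 = no useful information
        with torch.no_grad():
            # The network sees the image plus the mask and hallucinates colour
            # and texture consistent with the recognised scene.
            filled = compensation_net(torch.cat([night_image, hole_mask], dim=1))
        # Only the empty regions are replaced; real-scene pixels are kept.
        return night_image * (1 - hole_mask) + filled * hole_mask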
In other examples, the virtual image compensation may also be generated using conventional algorithms. For example, a filter whose style corresponds to the background image is determined according to the scene of the image, and the filter is superimposed on the original image to generate the background image information.
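A sketch of this conventional-filter alternative, where a warm tone-curve filter stands in for the scene-matched filter that the embodiment leaves unspecified; the curve and the blending strength are illustrative.

    import cv2
    import numpy as np

    def apply_scene_filter(bgr, strength=0.6):
        # Gamma-style lookup table that lifts mid-tones (input assumed uint8 BGR).
        lut = np.clip(((np.arange(256) / 255.0) ** 0.8) * 255.0, 0, 255).astype(np.uint8)
        warmed = bgr.copy()
        warmed[..., 2] = cv2.LUT(bgr[..., 2], lut)      # brighten the red channel
        warmed[..., 1] = cv2.LUT(bgr[..., 1], lut)      # and the green channel
        # Blend the filtered frame back onto the original.
        return cv2.addWeighted(bgr, 1 - strength, warmed, strength, 0)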
In other examples, a simple map-material method may be used: materials such as lanterns, leaves, flowers, plants, and trees are collected or drawn in advance and stored as small local virtual images; after the user takes a photo, a simple scene recognition algorithm finds a similar scene, and the corresponding material is pasted onto the photo to form an image combining virtual and real content.
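A sketch of this material-library alternative: a scene label selects a pre-drawn PNG sticker which is alpha-composited onto the photo. The library layout, file names, and placement are assumptions for illustration.

    import cv2
    import numpy as np

    MATERIALS = {"street": "materials/lantern.png",
                 "park":   "materials/leaves.png"}              # hypothetical material library

    def paste_material(bgr, scene, x, y):
        sticker = cv2.imread(MATERIALS[scene], cv2.IMREAD_UNCHANGED)   # BGRA sticker
        h, w = sticker.shape[:2]                                       # placement assumed in bounds
        alpha = sticker[..., 3:4].astype(np.float32) / 255.0
        roi = bgr[y:y + h, x:x + w].astype(np.float32)
        blended = alpha * sticker[..., :3] + (1 - alpha) * roi         # alpha blend onto the photo
        out = bgr.copy()
        out[y:y + h, x:x + w] = blended.astype(np.uint8)
        return out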
S05: and synthesizing the enhanced useful live-action information and the background image information to obtain a night-action image.
Finally, the images produced in the two preceding steps are combined through synthesis algorithms such as image stitching and superposition, converting the night scene into a virtual-real combined image with an enhanced visual experience.
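A sketch of step S05 using mask-based superposition: the enhanced real-scene pixels are pasted over the generated background and the seam is softened with a blurred mask. The feathering is an illustrative choice; the embodiment only requires stitching or superposition.

    import cv2
    import numpy as np

    def compose_night_scene(enhanced_fg, background, fg_mask):
        # enhanced_fg: daytime-style real scene; background: generated background;
        # fg_mask: uint8, 255 where real-scene pixels should be kept.
        soft = cv2.GaussianBlur(fg_mask, (21, 21), 0).astype(np.float32) / 255.0
        soft = soft[..., None]                                  # (H, W, 1) for broadcasting
        out = soft * enhanced_fg.astype(np.float32) + (1 - soft) * background.astype(np.float32)
        return out.astype(np.uint8)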
The method for supplementing image information when taking a photo in a dim-light environment provided by the invention can supplement the color and texture of a night-scene image in real time, so that a user can capture at night an effect similar to a bright daytime scene, which improves the user experience.
The embodiment of the present application further provides an apparatus for taking a picture in a dark light environment to supplement image information, as shown in fig. 2, the apparatus 400 for taking a picture in a dark light environment to supplement image information includes:
an image obtaining module 401, configured to obtain an original RAW image captured by a camera;
a useful real-scene information identification module 402, configured to preprocess the RAW image and identify useful real-scene information in the RAW image;
a useful real-scene information enhancement module 403, configured to perform separation processing and enhancement processing on the useful real-scene information;
a background image information generating module 404, configured to generate background image information according to the useful real-scene information;
and a synthesizing module 405, configured to synthesize the enhanced useful real-scene information and the background image information to obtain a night-scene image.
Preferably, the useful real-scene information enhancement module comprises:
the separation unit is used for performing edge extraction on the useful real-scene information by using an edge detection algorithm to obtain the separated useful real-scene information.
Preferably, the useful real-scene information enhancement module comprises:
the enhancement unit is used for converting the useful real-scene information into a bright daytime scene style by using a style conversion network.
Preferably, the background image information generating module includes:
the scene recognition unit is used for obtaining the scene of the image according to the type of the useful real scene information;
and the background image information generating unit is used for generating background image information corresponding to the scene according to the scene of the image.
Preferably, the background image information generating unit includes:
the image information supplementing element is used for performing image information supplementation, including color and texture supplementation, by using an image color-texture compensation network trained for the corresponding scene, to generate the background image information.
Preferably, the background image information generating unit includes:
the filter style determining element is used for determining a filter with a style corresponding to a background image according to the scene of the image;
and a filter superimposing element for superimposing the filter on the original image to generate background image information.
Preferably, the background image information generating unit includes:
the map material determining element is used for matching map materials in a material library according to the scene of the image;
and the map material superimposing element is used for superimposing the map materials on the original image to generate the background image information.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units. It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides an electronic device, including:
at least one memory and at least one processor;
the memory for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to perform the steps of the method for taking a photo in a dim-light environment to supplement image information as described above.
For the apparatus embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described device embodiments are merely illustrative, wherein the components described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the disclosure. One of ordinary skill in the art can understand and implement it without inventive effort.
Embodiments of the present application also provide a computer-readable storage medium,
the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of taking a photo in a dim light environment to supplement image information as previously described.
Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The invention provides a method and a device for supplementing image information when taking a photo in a dim-light environment. By means of this scheme, color and texture supplementation of a night-scene image can be realized in real time, so that a user can capture at night an effect similar to a bright daytime scene, and the user experience is improved. The night-scene photo is converted into a photo combining virtual and real content based on a specific algorithm, an AI model, or similar methods; the algorithm compensates the dim-light image information and generates virtual scenery for the user in different dim-light scenes. This solves pain points of night photography such as dark, blurred, and unclear pictures, heavy noise, and completely black regions where no light falls. By combining virtual and real images, the real-scene information the user wants can still be captured, while the current scene is decorated with virtual imagery.
The above-mentioned embodiments only express several implementations of the present invention, and although their description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these all fall within the protection scope of the present invention.

Claims (10)

1. A method for taking a picture in a dim light environment to supplement image information, comprising the steps of:
acquiring an original RAW image shot by a camera;
preprocessing the RAW image and identifying useful real-scene information in the RAW image;
performing separation processing and enhancement processing on the useful real-scene information;
generating background image information according to the useful real-scene information;
and synthesizing the enhanced useful real-scene information with the background image information to obtain a night-scene image.
2. The method of claim 1, wherein the separation processing of the useful real-scene information comprises:
performing edge extraction on the useful real-scene information by using an edge detection algorithm to obtain the separated useful real-scene information.
3. The method of claim 1, wherein the enhancement processing of the useful real-scene information comprises:
converting the useful real-scene information into a bright daytime scene style by using a style conversion network.
4. The method of claim 1, wherein generating background image information according to the useful real-scene information comprises:
obtaining the scene of the image according to the type of the useful real-scene information;
and generating background image information corresponding to the scene according to the scene of the image.
5. The method of claim 4, wherein generating background image information corresponding to a scene of the image according to the scene comprises:
and performing image information supplementation including complementary color and texture supplementation by using the image color texture compensation network trained in the corresponding scene to generate background image information.
6. The method of claim 4, wherein generating background image information corresponding to a scene of the image according to the scene comprises:
determining a filter with a style corresponding to a background image according to the scene of the image;
and superposing the filter on the original image to generate background image information.
7. The method of claim 4, wherein generating background image information corresponding to a scene of the image according to the scene comprises:
matching map materials in a material library according to the scene of the image;
and superimposing the map materials on the original image to generate the background image information.
8. An apparatus for taking a picture in a dim light environment to supplement image information, comprising:
the image acquisition module is used for acquiring an original RAW image shot by the camera;
the useful real-scene information identification module is used for preprocessing the RAW image and identifying useful real-scene information in the RAW image;
the useful real-scene information enhancement module is used for performing separation processing and enhancement processing on the useful real-scene information;
the background image information generating module is used for generating background image information according to the useful real-scene information;
and the synthesis module is used for synthesizing the enhanced useful real-scene information and the background image information to obtain a night-scene image.
9. An electronic device, comprising:
at least one memory and at least one processor;
the memory to store one or more programs;
when executed by the at least one processor, the one or more programs cause the at least one processor to perform the steps of the method for taking a photo in a dim-light environment to supplement image information according to any one of claims 1-7.
10. A computer-readable storage medium characterized by:
the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of taking a photo in a dim light environment to supplement image information as claimed in any one of claims 1 to 7.
CN202210766024.2A 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment Active CN115334234B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210766024.2A CN115334234B (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment
CN202410320303.5A CN118264919A (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210766024.2A CN115334234B (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410320303.5A Division CN118264919A (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Publications (2)

Publication Number Publication Date
CN115334234A (en) 2022-11-11
CN115334234B CN115334234B (en) 2024-03-29

Family

ID=83917071

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210766024.2A Active CN115334234B (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment
CN202410320303.5A Pending CN118264919A (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202410320303.5A Pending CN118264919A (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Country Status (1)

Country Link
CN (2) CN115334234B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872526A (en) * 2010-06-01 2010-10-27 重庆市海普软件产业有限公司 Smoke and fire intelligent identification method based on programmable photographing technology
CN107241559A (en) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Portrait photographic method, device and picture pick-up device
CN108764370A (en) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and computer equipment
EP3435284A1 (en) * 2017-07-27 2019-01-30 Rockwell Collins, Inc. Neural network foreground separation for mixed reality
WO2020038087A1 (en) * 2018-08-22 2020-02-27 Oppo广东移动通信有限公司 Method and apparatus for photographic control in super night scene mode and electronic device
WO2020119082A1 (en) * 2018-12-10 2020-06-18 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image acquisition
WO2021057277A1 (en) * 2019-09-23 2021-04-01 华为技术有限公司 Photographing method in dark light and electronic device
CN113449572A (en) * 2020-03-27 2021-09-28 西安欧思奇软件有限公司 Face unlocking method and system in dim light scene, storage medium and computer equipment thereof
WO2021204202A1 (en) * 2020-04-10 2021-10-14 华为技术有限公司 Image auto white balance method and apparatus
CN114422682A (en) * 2022-01-28 2022-04-29 安谋科技(中国)有限公司 Photographing method, electronic device, and readable storage medium


Also Published As

Publication number Publication date
CN118264919A (en) 2024-06-28
CN115334234B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
US10015469B2 (en) Image blur based on 3D depth information
CN106778928B (en) Image processing method and device
CN101422035B (en) Light source estimation device, light source estimation system, light source estimation method, device having increased image resolution, and method for increasing image resolution
CN110428366A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN106203286B (en) Augmented reality content acquisition method and device and mobile terminal
WO2021177324A1 (en) Image generating device, image generating method, recording medium generating method, learning model generating device, learning model generating method, learning model, data processing device, data processing method, inferring method, electronic instrument, generating method, program, and non-transitory computer-readable medium
CN108961302A (en) Image processing method, device, mobile terminal and computer readable storage medium
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN113572962A (en) Outdoor natural scene illumination estimation method and device
CN111833360B (en) Image processing method, device, equipment and computer readable storage medium
WO2023066173A1 (en) Image processing method and apparatus, and storage medium and electronic device
CN114862698B (en) Channel-guided real overexposure image correction method and device
CN108875751A (en) Image processing method and device, the training method of neural network, storage medium
CN106657817A (en) Processing method applied to mobile phone platform for automatically making album MV
Lee et al. Generative single image reflection separation
CN116157805A (en) Camera image or video processing pipeline using neural embedding
CN112651911A (en) High dynamic range imaging generation method based on polarization image
CN114897916A (en) Image processing method and device, nonvolatile readable storage medium and electronic equipment
CN108322648A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN112150363B (en) Convolutional neural network-based image night scene processing method, computing module for operating method and readable storage medium
CN113253890A (en) Video image matting method, system and medium
CN109063562A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN115334234B (en) Method and device for taking photo supplementary image information in dim light environment
CN112712525A (en) Multi-party image interaction system and method
CN111105369A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant