CN111741214A - Image processing method and device and electronic equipment - Google Patents
- Publication number: CN111741214A (application CN202010405431.1A)
- Authority: CN (China)
- Prior art keywords: image, position information, fused, light source, coordinate system
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The invention provides an image processing method, an image processing apparatus, and an electronic device, relating to the technical field of image processing. The method includes: first, acquiring an image to be processed; performing image fusion processing on the image to be processed to remove stray light spots in it and obtain an image to be fused; then determining the position information of the light source in the image to be fused; determining a material image based on the position information, where the material image contains a target light spot and the position information of the target light spot in the material image corresponds to the position information of the light source in the image to be fused; and finally, fusing the image to be fused with the material image to obtain a target fusion image containing the target light spot. By fusing a material image containing a target light spot with an image to be fused from which stray light spots have been removed, the embodiments of the invention can solve the technical problem that an ordinary camera or mobile phone cannot capture a starburst effect similar to that of a single-lens reflex camera.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
At present, images shot with a single-lens reflex (SLR) camera exhibit a good starburst effect, but an ordinary camera or mobile phone cannot capture a comparable effect. An SLR camera is expensive, bulky, and inconvenient to carry, and is therefore unsuitable for ordinary users; meanwhile, halos readily appear in images shot with an ordinary camera or mobile phone, and the stray light spots produced by these halos degrade the contrast and sharpness of the image.
Disclosure of Invention
The object of the present invention is to provide an image processing method, an image processing apparatus, and an electronic device, so as to solve the technical problem that an ordinary camera or mobile phone cannot capture a starburst effect similar to that of a single-lens reflex camera.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes: acquiring an image to be processed; performing image fusion processing on the image to be processed to remove stray light spots in the image to be processed to obtain an image to be fused; determining the position information of a light source in the image to be fused; determining a material image based on the position information, wherein the material image comprises a target light spot, and the position information of the target light spot in the material image corresponds to the position information of a light source in the image to be fused; and fusing the image to be fused and the material image to obtain a target fusion image containing target light spots.
Further, the image to be processed includes a normally exposed image to be processed and at least one underexposed image to be processed. Performing image fusion processing on the image to be processed to remove stray light spots and obtain the image to be fused includes: performing HDR processing on the normally exposed image to be processed and the at least one underexposed image to be processed to obtain the image to be fused.
Further, determining the position information of the light source in the image to be fused includes: determining the position information of the light source in an image coordinate system according to the image to be fused to obtain first position information, where the image coordinate system is determined based on the image to be fused; acquiring the position information, in a world coordinate system, of a first camera device used to capture the image to be processed, to obtain second position information; determining third position information based on the second position information, the first position information, and the world coordinate system, where the third position information characterizes the position information of the light source in the world coordinate system; and determining the first position information and/or the third position information as the position information of the light source in the image to be fused.
Further, determining the position information of the light source in the image coordinate system according to the image to be fused includes: preprocessing the image to be fused to obtain a preprocessed image to be fused, where the preprocessing includes at least one of: binarization, erosion, and dilation; determining the gravity center position information of the light source in the preprocessed image to be fused through a target gravity center function; and determining the position information of the light source in the image coordinate system based on the gravity center position information and the image coordinate system.
Further, acquiring the position information, in the world coordinate system, of the first camera device used to capture the image to be processed to obtain the second position information includes: acquiring the position information of the first camera device in the world coordinate system recorded by a first direction sensor when the first camera device captures the image to be processed; and determining the recorded position information as the second position information, where the first direction sensor is disposed adjacent to the first camera device.
Further, fusing the image to be fused with the material image to obtain a target fusion image containing the target light spot includes: determining attribute information of the light source in the image to be fused and scaling the material image according to the attribute information; and fusing the scaled material image with the image to be fused to obtain the target fusion image containing the target light spot.
Further, before determining the material image, the method includes: acquiring, by a second camera device, an image containing the target light spot at each of at least one position to obtain at least one reference image. Determining the material image based on the position information then includes: determining the material image among the at least one reference image according to the position information of the light source in the image to be fused.
Further, the position information of the light source in the image to be fused represents the position information of the light source in the image coordinate system; determining the material image among the at least one reference image according to the position information of the light source in the image to be fused includes: searching, among the at least one reference image, for a reference image corresponding to the position information of the light source in the image coordinate system, and determining the found reference image as the material image.
Further, the position information of the light source in the image to be fused represents the position information of the light source in the world coordinate system. Determining the material image among the at least one reference image according to the position information of the light source in the image to be fused further includes: acquiring the correspondence between the position information of the target light spot in each reference image in an image coordinate system and first attitude information, where the first attitude information is the attitude information of the second camera device when it shot each reference image; determining the position information of the target light spot in each reference image in a world coordinate system according to the correspondence; determining the position information of the light source in the image to be fused in the world coordinate system according to second attitude information and the position information of the light source in the image coordinate system, where the second attitude information is the attitude information of the first camera device when it shot the image to be fused; and determining the material image among the at least one reference image according to the position information of the target light spot in each reference image in the world coordinate system and the position information of the light source in the image to be fused in the world coordinate system.
Furthermore, the second camera device is fixed on a rotating platform with a built-in second direction sensor; the rotating platform is used to rotate the second camera device so as to change the position information of the target light spot in the image coordinate system, and the second direction sensor is used to record the position information of the second camera device in the world coordinate system.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: the acquisition fusion unit is used for acquiring an image to be processed; performing image fusion processing on the image to be processed to remove stray light spots in the image to be processed to obtain an image to be fused; the first determining unit is used for determining the position information of the light source in the image to be fused; the second determining unit is used for determining a material image based on the position information, wherein the material image comprises a target light spot, and the position information of the target light spot in the material image corresponds to the position information of a light source in the image to be fused; and the fusion processing unit is used for carrying out fusion processing on the image to be fused and the material image to obtain a target fusion image containing a target light spot.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor executes the computer program to implement the method according to any one of the above first aspects.
In a fourth aspect, the present invention provides a computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to execute the method according to any one of the above first aspects.
In the embodiment of the invention, an image to be processed is obtained first; image fusion processing is performed on the image to be processed to remove stray light spots and obtain an image to be fused; the position information of the light source in the image to be fused is then determined; a material image is determined based on the position information, where the material image contains a target light spot whose position information in the material image corresponds to the position information of the light source in the image to be fused; and finally, the image to be fused and the material image are fused to obtain a target fusion image containing the target light spot. By fusing a material image containing a target light spot with an image to be fused from which stray light spots have been removed, the embodiments of the invention can solve the technical problem that an ordinary camera or mobile phone cannot capture a starburst effect similar to that of a single-lens reflex camera, thereby achieving the technical effect of an ordinary camera or mobile phone capturing such a starburst effect.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of an electronic device provided according to an embodiment of the invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of step S204 in an image processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of step S301 in an image processing method according to an embodiment of the present invention;
FIG. 5 is a flowchart of step S208 in an image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an image processing method applied to a sunrise or sunset scene according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of determining a starburst sample in an image coordinate system according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of determining a starburst sample in a world coordinate system according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a target fusion image provided according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
First, an electronic device 100 for implementing an embodiment of the present invention, which can be used to execute the image processing method according to embodiments of the present invention, is described with reference to FIG. 1.
As shown in FIG. 1, electronic device 100 includes one or more processing devices 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processing device 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processing device 102 to implement the client functionality (implemented by a processor) and/or other desired functionality in the embodiments of the present invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
For example, an electronic device for implementing the image processing method according to the embodiment of the present invention may be implemented as a mobile terminal such as a smartphone or a tablet computer.
Example 2:
According to an embodiment of the present invention, an embodiment of an image processing method is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system capable of executing a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the order here.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 2, the method including the steps of:
step S202, acquiring an image to be processed; carrying out image fusion processing on the image to be processed to remove stray light spots in the image to be processed to obtain the image to be fused;
In the embodiment of the invention, the image to be processed may be at least two preview images shot by an ordinary camera or a mobile phone, with different preview images having different exposure values. Because the exposure values differ, stray light spots of different brightness and size exist in preview images with different exposure values. Stray light spots create a halo effect in the preview image and thus affect its contrast and sharpness. If the stray light spots were removed simply by lowering the exposure value, image information in regions other than the stray light spots would easily be lost. Therefore, to remove the stray light spots while preventing such loss, this embodiment performs image fusion processing on the image to be processed to obtain the image to be fused.
Step S204, determining the position information of the light source in the image to be fused;
In embodiments of the present invention, light sources include, but are not limited to: the sun, electric lights, and candles. Both stray light spots and the light source exist in the image to be processed obtained in step S202, and the stray light spots lie close to the light source, so locating the light source directly in the image to be processed is easily made inaccurate by interference from the stray light spots. The image to be fused, obtained after the stray light spots are removed, is no longer subject to such interference, which improves the accuracy of the position information of the light source in the image to be fused.
Step S206, determining a material image based on the position information, wherein the material image comprises a target light spot, and the position information of the target light spot in the material image corresponds to the position information of the light source in the image to be fused;
in the embodiment of the invention, the image to be fused comprises a light source, the material image comprises a target light spot, and the position information of the target light spot in the material image can be in one of the following two forms, wherein one form is the position information of the target light spot in the material image in a world coordinate system, and the other form is the position information of the target light spot in the material image in an image coordinate system; similarly, the position information of the light source in the image to be fused is also in one of the following two forms, one is the position information of the light source in the image to be fused in the world coordinate system (i.e. the third position information in step S301 described below), and the other is the position information of the light source in the image coordinate system (i.e. the first position information in step S303 described below). After the position information of the light source in the image to be fused is determined, the material image in the same form is determined according to the form of the position information of the light source in the image to be fused.
And S208, fusing the image to be fused and the material image to obtain a target fusion image containing the target light spot.
The present embodiment may refer to the picture effect formed by the target light spot in the material image as a starburst effect, which can be understood as capturing the light emitted by a light source as a cross-shaped or similarly radial burst of light during shooting. The starburst effect of the material image is a picture effect that the light source in the image to be fused cannot present on its own, so by fusing a material image containing a target light spot that produces a starburst effect into the image to be fused, the starburst effect produced by the target light spot can be presented in the image to be fused, making up for its inability to present a starburst effect.
In the embodiment of the invention, the technical problem that a common camera or a mobile phone cannot shoot the starburst effect similar to a single-lens reflex camera can be solved by fusing the image to be fused and the material image with the starburst effect, so that the technical effect that the common camera or the mobile phone shoots the starburst effect similar to the single-lens reflex camera is realized.
The above-described image processing method will be described below with reference to specific embodiments.
As can be seen from the above description, in the embodiment of the present invention, an image to be processed is first obtained, and then image fusion processing is performed on the image to be processed, so as to remove stray light spots in the image to be processed, and obtain an image to be fused.
In an alternative embodiment, the image to be processed includes a normally exposed image to be processed and at least one underexposed image to be processed. In the above step S202, performing image fusion processing on the image to be processed to remove the stray light spots and obtain the image to be fused includes the following step: performing HDR processing on the normally exposed image to be processed and the at least one underexposed image to be processed to obtain the image to be fused.
In the embodiment of the invention, stray light spots in the image to be processed produce a halo effect in the preview image, and removing the stray light spots eliminates that halo effect. The lower the exposure value of an underexposed image to be processed, the less obvious its halo effect, but also the lower the definition of its overall picture. This embodiment therefore performs HDR processing using the underexposed image(s), whose halo effect is unobvious, together with the normally exposed image, whose definition is higher, so that stray light spots are removed while higher definition is retained.
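As an illustration of this step (a minimal sketch only, not the claimed implementation: the patent does not prescribe a particular HDR algorithm, so OpenCV's Mertens exposure fusion is assumed here, and the file names are placeholders):

```python
import cv2
import numpy as np

# A normally exposed frame plus underexposed frames of the same scene.
# File names are illustrative; Example 3 below uses EV0 and EV-2 .. EV-8.
frames = [cv2.imread(p) for p in ["ev0.jpg", "ev-2.jpg", "ev-4.jpg"]]

# Mertens exposure fusion keeps the well-exposed parts of every frame:
# the dark frames suppress the blown-out halo around the light source,
# while the normal frame preserves detail in the rest of the picture.
mertens = cv2.createMergeMertens()
fused = mertens.process(frames)  # float32 result, roughly in [0, 1]

# Back to 8-bit for the later segmentation and blending stages.
to_fuse = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("to_fuse.jpg", to_fuse)
```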
In an alternative embodiment, as shown in fig. 3, the step S204 of determining the position information of the light source in the image to be fused includes the following steps:
step S301, determining the position information of the light source in an image coordinate system according to the image to be fused to obtain first position information; the image coordinate system is determined based on the image to be fused;
step S302, acquiring position information of a first camera device used for collecting an image to be processed in a world coordinate system to obtain second position information.
Specifically, in the present application, attitude information of the first camera device used to capture the image to be processed is acquired, and the position information of the first camera device in the world coordinate system is determined based on that attitude information, yielding the second position information. The position information of the first camera device in the world coordinate system may be acquired by a first direction sensor, which may be any type of direction sensor mounted on the first camera device; the type of the first direction sensor is not specifically limited in the present application.
Step S303, determining third position information based on the second position information, the first position information and the world coordinate system; the third location information is used to characterize: position information of the light source in a world coordinate system;
and step S304, determining the first position information and/or the third position information as the position information of the light source in the image to be fused.
In an embodiment of the invention, the first position information and the third position information are descriptions of position information of the light source in different coordinate systems. The conversion relationship between the image coordinate system and the world coordinate system is the prior art, and is not described herein. The present embodiment may obtain the position information (i.e., the third position information) of the light source in the world coordinate system based on the position information of the light source in the image coordinate system, the position information (i.e., the second position information) of the first camera device in the world coordinate system for acquiring the image to be processed, and the conversion relationship between the image coordinate system and the world coordinate system.
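As an illustration of that conversion under a standard pinhole-camera model (a sketch under assumed intrinsics; the patent treats the image-to-world conversion as known art and does not fix a parameterization):

```python
import numpy as np

def pixel_to_world_ray(u, v, K, R):
    """Back-project pixel (u, v) into a direction in world coordinates.

    K is the 3x3 camera intrinsic matrix (assumed known from calibration);
    R is the 3x3 rotation of the camera in the world frame, e.g. derived
    from the first direction sensor's reading. For a distant source such
    as the sun, the returned unit direction alone identifies the light
    source's position in the world coordinate system (the third position
    information).
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R @ ray_cam
    return ray_world / np.linalg.norm(ray_world)

# Illustrative numbers: a 4000x3000 sensor, focal length ~3000 px,
# principal point at the image center, camera axes aligned with the world.
K = np.array([[3000.0, 0.0, 2000.0],
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
print(pixel_to_world_ray(2500.0, 900.0, K, R))
```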
In an alternative embodiment, as shown in fig. 4, the step S301 of determining the position information of the light source in the image coordinate system according to the image to be fused includes the following steps:
step S401, preprocessing an image to be fused to obtain a preprocessed image to be fused; wherein the pre-treatment comprises at least one of: binarization treatment, corrosion treatment and expansion treatment;
in the embodiment of the invention, the area where the light source is located in the image to be fused can be determined by using binarization processing, and the boundary of the area where the light source is located in the image to be fused can be smoothed by using erosion processing and expansion processing. Since the preprocessing such as binarization processing, erosion processing, dilation processing, etc. is a common processing method in the image segmentation technology, the specific operations of the preprocessing are not described in detail herein.
Step S402, determining the gravity center position information of the light source in the image to be fused after preprocessing through a target gravity center function;
in the embodiment of the present invention, the target barycentric function may be a "regionprops" function in matlab software, or may be other barycentric functions, and the specific expression of the barycentric function is not specifically limited in this embodiment. According to the embodiment, accurate calculation of the gravity center position information of the light source in the image to be fused after preprocessing can be achieved by calling a 'regionprops' function in matlab software. In this application, the center of gravity of a light source may be understood as the centroid of the light source.
As is apparent from the above description, in the present application, the barycentric location information of the light sources is determined by the images to be fused after the preprocessing. However, since the information of the barycentric position of the light source in the underexposed image to be processed and the information of the barycentric position of the light source in the image to be fused after the preprocessing coincide, in the present application, the information of the barycentric position of the light source can also be determined from the underexposed image to be processed.
For example, the gravity center position information of the light source may be obtained from an underexposed image to be processed whose exposure value is EV-4. In such an image the light source is generally the highlight part of the picture: the highlight region keeps a relatively high value, while the value of the non-highlight part drops because the picture is underexposed. A threshold range is then set, for example: regions with brightness greater than 200 (pixel brightness range 0-255) are regarded as light spot regions. Next, the image is binarized, for example by setting regions with pixel values greater than 200 to 1 (highlight regions) and regions with pixel values less than 200 to 0 (non-highlight regions). The image is then eroded and dilated; these operations smooth the edge burrs of the light spots and make their shapes as regular as possible, so that the 'regionprops' function can obtain a more accurate centroid. Finally, the centroid of the light source can be obtained with MATLAB's 'regionprops' function.
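The worked example above maps directly onto common image-segmentation primitives. A sketch in Python, with the threshold value 200 taken from the example (skimage's regionprops stands in for MATLAB's 'regionprops'; the file name is a placeholder):

```python
import cv2
import numpy as np
from skimage.measure import label, regionprops

img = cv2.imread("ev-4.jpg", cv2.IMREAD_GRAYSCALE)  # underexposed frame

# Binarization: pixels brighter than 200 (range 0-255) form the spot region.
_, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

# Erosion followed by dilation smooths edge burrs and regularizes the
# spot's shape so that the centroid estimate is more accurate.
kernel = np.ones((5, 5), np.uint8)
binary = cv2.dilate(cv2.erode(binary, kernel), kernel)

# Centroid of the largest bright region = center of gravity of the light source.
regions = regionprops(label(binary))
spot = max(regions, key=lambda r: r.area)
row, col = spot.centroid  # position of the light source in image coordinates
print(f"light source at (x={col:.1f}, y={row:.1f})")
```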
In step S403, position information of the light source in the image coordinate system is determined based on the center-of-gravity position information and the image coordinate system.
In the embodiment of the invention, the gravity center position information of the light source in the image to be fused can be accurately determined through the operations of the steps S401 to S403, and further the position information of the light source in the image coordinate system can be determined.
In an alternative embodiment, in step S302, acquiring the position information of the first camera device in the world coordinate system for acquiring the image to be processed, and obtaining the second position information includes the following steps: acquiring position information of the first camera device in a world coordinate system, which is recorded by a first direction sensor when the first camera device collects an image to be processed; and determining position information of the first camera in a world coordinate system as second position information, wherein the first direction sensor is arranged adjacent to the first camera.
In the embodiment of the present invention, the first image pickup device may be a general camera or a mobile phone, and the first direction sensor may be a type of direction sensor mounted on the general camera or the mobile phone. The first direction sensor is used for recording the position information (namely, the second position information) of the first camera device in the world coordinate system, and the purpose of recording the position information of the first camera device in the world coordinate system is to accurately determine the position information (namely, the third position information) of the light source in the world coordinate system.
In an alternative embodiment, as shown in fig. 5, in step S208, the step of performing fusion processing on the image to be fused and the material image to obtain a target fusion image including a target light spot includes the following steps:
step S501, determining attribute information of a light source in an image to be fused, and zooming a material image according to the attribute information; the attribute information may be information such as the size and dimension of the light source in the image to be fused. At this time, the material image can be scaled according to the size or dimension of the light source in the image to be fused. The purpose of zooming the material image is to make the size of the target light spot in the material image close to the size of the light source in the image to be fused, and at the moment, when the material image and the image to be fused are fused with each other with the light spots, the fusion effect between the target light spot in the material image and the light source in the image to be fused can be improved, so that a satisfactory target fusion image is obtained.
And step S502, fusing the zoomed material image and the image to be fused to obtain a target fusion image containing the target light spot.
In an embodiment of the present invention, the fusion process may refer to: and fusing the target light spot in the zoomed material image with the light source in the image to be fused, wherein the fusion process can enhance the starburst effect of the target fusion image.
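A sketch of one plausible way to perform this scaling and fusion (the patent does not specify a blend mode; a screen blend is assumed here because it composites bright light spots without darkening the base image, and the alignment logic is illustrative):

```python
import cv2
import numpy as np

def fuse_material(to_fuse, material, spot_xy, light_xy, scale):
    """Scale the material image so its target light spot matches the size
    of the light source, then screen-blend it onto the image to be fused
    so that the spot lands on the light source. Coordinates are (x, y)."""
    mat = cv2.resize(material, None, fx=scale, fy=scale)
    sx, sy = int(spot_xy[0] * scale), int(spot_xy[1] * scale)
    h, w = mat.shape[:2]
    H, W = to_fuse.shape[:2]

    # Place the material so its target light spot sits on the light source;
    # clip to the frame (assumes the two regions actually overlap).
    x0, y0 = int(light_xy[0]) - sx, int(light_xy[1]) - sy
    x1, y1 = max(x0, 0), max(y0, 0)
    x2, y2 = min(x0 + w, W), min(y0 + h, H)

    out = to_fuse.astype(np.float32) / 255.0
    patch = mat[(y1 - y0):(y2 - y0), (x1 - x0):(x2 - x0)].astype(np.float32) / 255.0
    roi = out[y1:y2, x1:x2]
    # Screen blend: brightens where the spot is bright, leaves dark areas alone.
    out[y1:y2, x1:x2] = 1.0 - (1.0 - roi) * (1.0 - patch)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```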
In an alternative embodiment, before determining the material image in step S206, the following steps may be included: acquiring an image containing a target light spot at least one position through a second camera device to obtain at least one reference image; step S206, determining the material image based on the position information includes: and determining the material image in at least one reference image according to the position information of the light source in the image to be fused.
In the embodiment of the invention, the second camera device may be a single-lens reflex camera, which, compared with an ordinary camera, can capture a good starburst effect. The reference images may be called starburst samples, and each reference image is stored together with the position information of the target light spot in it. The position information of the target light spot can be determined according to the position information of the light source in the image to be fused, and the reference image corresponding to that position information can then be selected from the at least one reference image as the material image.
Since the position information of the light source can be described in two ways (position information of the light source in the image coordinate system, or position information of the light source in the world coordinate system), there are two corresponding ways to determine the material image among the at least one reference image according to the position information of the light source in the image to be fused:
the first method is as follows: and the position information of the light source in the image to be fused represents the position information of the light source in the image coordinate system.
In this mode, a reference image whose target light spot corresponds to the position information of the light source in the image coordinate system is searched for among the at least one reference image, and the found reference image is determined as the material image.
Mode 2: the position information of the light source in the image to be fused represents the position information of the light source in the world coordinate system.
In this mode, the correspondence between the position information of the target light spot in each reference image in the image coordinate system and first attitude information is acquired, where the first attitude information is the attitude information of the second camera device when it shot each reference image. The position information of the target light spot in each reference image in the world coordinate system is determined according to this correspondence. The position information of the light source in the image to be fused in the world coordinate system is determined according to second attitude information and the position information of the light source in the image coordinate system, where the second attitude information is the attitude information of the first camera device when it shot the image to be fused. Finally, the material image is determined among the at least one reference image according to the position information of the target light spot in each reference image in the world coordinate system and the position information of the light source in the image to be fused in the world coordinate system.
In the embodiment of the invention, the target light spots in different reference images have different position information in the image coordinate system and different positions in the world coordinate system. Therefore, the corresponding material image can be found among the multiple reference images either from the position of the light source in the image coordinate system or from its position in the world coordinate system. It should be noted that, in searching for the corresponding material image, a reference image can be used as the material image when the position information of its target light spot in the image coordinate system is closest to that of the light source, or when the position information of its target light spot falls within a preset range centered on the position information of the light source in the image coordinate system. The present application therefore does not specifically limit the condition for determining the material image.
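A sketch of this lookup, matching by nearest position with an optional acceptance radius (the Euclidean metric and the data layout are assumptions; the passage above deliberately leaves the exact matching condition open):

```python
import numpy as np

def find_material(light_pos, references):
    """references: list of (image, spot_pos) pairs, where spot_pos is the
    target light spot's position in the same coordinate system (image or
    world) as light_pos. Returns the reference image whose spot is closest."""
    light = np.asarray(light_pos, dtype=float)
    dists = [np.linalg.norm(np.asarray(pos, dtype=float) - light)
             for _, pos in references]
    return references[int(np.argmin(dists))][0]

def find_material_within(light_pos, references, radius):
    """Variant: accept any reference whose spot falls within a preset
    radius centered on the light source's position, as described above."""
    light = np.asarray(light_pos, dtype=float)
    for image, pos in references:
        if np.linalg.norm(np.asarray(pos, dtype=float) - light) <= radius:
            return image
    return None
```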
In an alternative embodiment, the second camera device is fixed on a rotating platform with a built-in second direction sensor; the rotating platform is used to rotate the second camera device so as to change the position information of the target light spot in the image coordinate system, and the second direction sensor is used to record the position information of the second camera device in the world coordinate system.
In the embodiment of the present invention, the second camera device may be used to capture the reference images. It is fixed on a rotating platform with a built-in second direction sensor; rotating the platform changes the position information of the target light spot in the image coordinate system, so the light spot appears at different positions across the reference samples. Meanwhile, the second direction sensor records the position information of the second camera device in the world coordinate system, and the position information of the target light spot in the world coordinate system can be determined from the camera's position and the spot's position in the image coordinate system. Likewise, the position information of the light source in the world coordinate system is determined from the position of the first camera device in the world coordinate system and the position of the light source in the image coordinate system. The material image is then determined from the at least one reference image according to the position information of the light source and of the target light spots in the world coordinate system.
The embodiment of the invention can perform HDR processing on an underexposed image to be processed with an unobvious halo effect and a normally exposed image to be processed with higher definition, both acquired by an ordinary camera or mobile phone, to obtain an image to be fused; the preprocessed image to be fused is then obtained based on an image segmentation technique. Finally, by fusing the material image with the image to be fused, an ordinary camera or mobile phone can capture a starburst effect similar to that of a single-lens reflex camera.
Example 3:
on the basis of the foregoing embodiments, the present embodiment provides an example in which an image processing method is applied in a sunrise (or sunset) scene.
Taking a sunrise scene as an example, an image containing the sun (the image to be processed) acquired with a mobile phone as the first camera device often cannot exhibit a starburst effect similar to that of an image (material image) captured by a single-lens reflex camera. To improve the shooting effect (starburst effect) of the mobile phone, the embodiment of the present invention uses HDR processing, image segmentation, and fusion processing to fuse the image containing the sun acquired by the phone with a starburst sample, so that the phone can achieve the shooting effect of an SLR camera. Referring to FIG. 7 and FIG. 8, the specific flow of the image processing method can be described in the following three parts:
the method comprises the following steps of firstly, collecting starburst samples and carrying out subsequent pretreatment on the starburst samples:
step 1, in order to realize the collection of the starburst sample, in this embodiment, under a dark darkroom condition, a single-phase inverter (for example, a large three-element) is fixed on a rotating platform with a built-in direction sensor (a second direction sensor), and a preset light source (for example, a light-emitting lamp) is fixed at a certain spatial position with a preset distance from the single-phase inverter. When the rotating platform rotates, the attitude information of the single lens reflex recorded by the direction sensor arranged in the rotating platform when the single lens reflex shoots an image is changed, and the position information of the single lens reflex in the world coordinate system is also changed. In the embodiment, the posture information of the single lens reflex camera during image shooting is adjusted by continuously rotating the rotating platform, so that a plurality of starburst samples of different position information of the target light spot in the image coordinate system are acquired.
It should be noted that the preset light source defaults to a circular light source, mainly for two reasons: first, most light sources emit light that is circular in shape; second, at a distance from the photographer, a light source approximates a circular light source in the starburst sample. In addition, the direction sensor built into the rotating platform records the attitude information of the SLR camera when each starburst sample is shot, which makes it convenient for the mobile phone to later retrieve the starburst sample with the same position information in the world coordinate system.
Step 2: with the light source fixed, the rotating platform is rotated so that the SLR camera shoots a plurality of starburst samples, i.e., light spots are distributed at different positions in different starburst sample pictures at different moments (as shown in FIG. 6). After all samples at one distance have been collected, the distance between the SLR camera and the preset light source is adjusted, and a plurality of starburst samples at a second distance are collected. While the SLR camera shoots the starburst samples, the attitude information of the camera corresponding to each sample is recorded. Generally, one starburst exists in one starburst sample; one distance and one rotation angle correspond to one starburst sample, and the total number of starburst samples is the product of the number of preset distances, X (rotation angles about the X axis), Y (rotation angles about the Y axis), and Z (rotation angles about the Z axis).
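The bookkeeping implied by this step (one sample per distance and rotation, with the pose recorded alongside each sample) can be sketched as follows; the distance and angle grids are illustrative, and the file names are placeholders for the platform's real capture output:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class StarburstSample:
    image_path: str
    distance_m: float
    pose_xyz_deg: tuple  # rotation about the X, Y, Z axes (first attitude info)

# Illustrative grids; the total sample count is the product of the number
# of preset distances and the X, Y, and Z rotation angles, as stated above.
distances = [1.0, 2.0, 3.0]
xs = ys = zs = list(range(-30, 31, 10))  # 7 angles per axis

samples = [
    StarburstSample(f"sample_d{d}_x{x}_y{y}_z{z}.png", d, (x, y, z))
    for d, (x, y, z) in product(distances, product(xs, ys, zs))
]
print(len(samples))  # 3 * 7 * 7 * 7 = 1029
```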
Step 3: the starburst samples are preprocessed using an image segmentation technique, and the position information of the target light spot in each starburst sample is then calculated, where the preprocessing includes but is not limited to: binarization, erosion, and dilation.
In actual operation, the starburst samples are collected in a darkroom, so the target light spot is the high-brightness part of the starburst sample picture. In this embodiment, a threshold range can be set by analyzing the image brightness in the sample picture (for example, a region with brightness greater than 200 is taken as the region where the target light spot is located). The starburst sample is then binarized; after erosion and dilation are applied to the binarized sample, the edge burrs of the target light spot become relatively smooth and its shape relatively regular. Finally, through the 'regionprops' function in MATLAB software, the center of gravity of the target light spot, i.e., the position information of the target light spot in the image coordinate system of the starburst sample, can be obtained.
Obtaining the center of gravity of the target light spot is equivalent to obtaining its centroid, and the technique for obtaining the centroid of a light spot is relatively mature, so the embodiment of the invention does not describe it in detail.
After the position information of the target light spot in the image coordinate system is determined for a starburst sample, it can be converted into the position information of the target light spot in the world coordinate system, because the direction sensor has recorded the position information of the SLR camera in the world coordinate system.
The second part: the mobile phone acquires the image to be processed and preprocesses it.
step 1, when exposure values are set to EV0, EV-2, EV-4, EV-6 and EV-8, a scene containing the sun is photographed by a mobile phone, 5 to-be-processed images obtained through photographing are subjected to HDR processing, and the halo effect caused by stray light in the to-be-processed images can be reduced.
In this embodiment, the halo effect is most obvious in the image corresponding to EV0 and least obvious in the image corresponding to EV-8. The halo effect is generally reduced by fusing the images corresponding to EV-2 through EV-8 with the image corresponding to EV0. Fusing images at different EV levels makes the transitions in the final fused image (the image to be fused) smoother and more accurate. Although the halo effect is least obvious in the EV-8 image, that image easily loses information in regions other than the one where the sun is located, so images at different EV levels can be selected for fusion according to the environment of the sunrise scene.
Step 2: the fused image is preprocessed using an image segmentation technique, and the position information of the sun in the fused image is then calculated, where the preprocessing includes but is not limited to: binarization, erosion, and dilation.
In this embodiment, calculating the position information (first position information) of the sun in the image coordinate system is equivalent to obtaining the centroid of the sun, and obtaining the sun's centroid is similar to obtaining the centroid of the target light spot, so that step is not described in detail here. After the position information of the sun in the image coordinate system is determined, the position information (second position information) of the mobile phone in the world coordinate system is determined based on the phone's direction sensor. Finally, using the phone's position in the world coordinate system and the sun's position in the image coordinate system, the sun's position is converted into its position information (third position information) in the world coordinate system.
The third part: fusing the fused image, from which stray light spots have been removed, with the starburst sample.
step 1, there are two ways to search for the starburst proof corresponding to the fused image without the stray light spots, one way is to determine the starburst proof corresponding to the position information in the image coordinate system by using the position information of the sun in the image coordinate system, and the other way is to determine the starburst proof corresponding to the position information in the world coordinate system by using the position information of the sun in the world coordinate system, and the two ways are respectively shown in fig. 7 and 8.
and 3, performing fusion processing on the zoomed starburst sample sheet and the fusion image without the stray light spots to obtain a target fusion image with a better starburst effect, as shown in fig. 9.
By fusing the scaled starburst sample with the fused image from which stray light spots have been removed, the embodiment of the invention can improve the shooting effect of a mobile phone in a sunrise scene.
Example 4:
an embodiment of the present invention further provides an image processing apparatus, which is mainly used for executing the image processing method provided by the foregoing content of the embodiment of the present invention, and the image processing apparatus provided by the embodiment of the present invention is specifically described below.
Fig. 10 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention, which mainly includes, as shown in fig. 10, an acquisition fusion unit 10, a first determination unit 20, a second determination unit 30, and a fusion processing unit 40, in which:
the acquisition and fusion unit 10 is used for acquiring an image to be processed; carrying out image fusion processing on the image to be processed to remove stray light spots in the image to be processed to obtain the image to be fused;
a first determining unit 20, configured to determine position information of a light source in an image to be fused;
the second determining unit 30 is configured to determine a material image according to the position information, where the material image includes a target light spot, and the position information of the target light spot in the material image corresponds to the position information of the light source in the image to be fused;
and the fusion processing unit 40 is configured to perform fusion processing on the image to be fused and the material image to obtain a target fusion image including the target light spot.
In the embodiment of the invention, an image to be processed is acquired by the acquisition and fusion unit 10 and subjected to image fusion processing to remove stray light spots, yielding the image to be fused; the first determining unit 20 then determines the position information of the light source in the image to be fused; the second determining unit 30 determines a material image, where the material image contains a target light spot whose position information in the material image corresponds to the position information of the light source in the image to be fused; finally, the fusion processing unit 40 fuses the image to be fused with the material image to obtain a target fusion image containing the target light spot. By fusing a material image containing a target light spot with an image to be fused from which stray light spots have been removed, the embodiment can solve the technical problem that an ordinary camera or mobile phone cannot capture a starburst effect similar to that of a single-lens reflex camera, thereby achieving the technical effect of an ordinary camera or mobile phone capturing such a starburst effect.
Optionally, the image to be processed comprises: the image processing method comprises the steps of normally exposing an image to be processed and at least one under-exposed image to be processed; the acquisition fusion unit 10 is configured to: and performing HDR processing on the normally exposed to-be-processed image and the at least one under-exposed to-be-processed image to obtain an image to be fused.
Optionally, the first determination unit 20 includes: a first determining subunit, an acquiring subunit, a second determining subunit and a third determining subunit, wherein:
the first determining subunit is used for determining first position information of the light source in the image coordinate system according to the image to be fused; the image coordinate system is determined based on the image to be fused;
the acquiring subunit is used for acquiring position information, in a world coordinate system, of a first camera device used for acquiring the image to be processed, so as to obtain second position information;
a second determining subunit, configured to determine third position information based on the second position information, the first position information, and the world coordinate system; the third location information is used to characterize: position information of the light source in a world coordinate system;
and the third determining subunit is used for determining the first position information and/or the third position information as the position information of the light source in the image to be fused.
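As one possible reading of the second determining subunit, a pixel position can be lifted into the world coordinate system by back-projecting it through the camera intrinsics and rotating the resulting ray with the orientation reported by the first direction sensor. The pinhole model, the intrinsic matrix K, and the rotation R below are all assumptions; the patent does not fix a camera model:

```python
import numpy as np

def light_direction_world(u, v, K, R):
    """Back-project pixel (u, v) through intrinsics K, then rotate the
    viewing ray into the world frame with the camera-to-world rotation R.
    Returns a unit direction (a pinhole-model assumption, not mandated
    by the patent)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R @ ray_cam
    return ray_world / np.linalg.norm(ray_world)

# Hypothetical intrinsics and an identity orientation for illustration.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
print(light_direction_world(800.0, 300.0, K, R))
```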
Optionally, the first determining subunit includes: a preprocessing module, a first determining module, and a second determining module, in which:
the preprocessing module is used for preprocessing the image to be fused to obtain the preprocessed image to be fused; where the preprocessing includes at least one of: binarization processing, erosion processing, and dilation processing;
the first determining module is used for determining, through a target gravity center function, the gravity center position information of the light source in the preprocessed image to be fused;
and the second determining module is used for determining the position information of the light source in the image coordinate system based on the gravity center position information and the image coordinate system (a sketch of these steps follows below).
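A minimal sketch of this preprocessing and gravity-center computation follows, assuming OpenCV; the brightness threshold is a made-up value, and image moments stand in for the patent's unspecified target gravity center function:

```python
import cv2
import numpy as np

img = cv2.imread("to_fuse.jpg", cv2.IMREAD_GRAYSCALE)

# Binarization: only very bright pixels (candidate light sources)
# survive; the threshold 240 is an assumption, not from the patent.
_, mask = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY)

# Erosion followed by dilation (morphological opening) removes small
# noise speckles while keeping the light-source blob near its size.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel), kernel)

# Gravity center via image moments, standing in for the patent's
# "target gravity center function".
m = cv2.moments(mask, binaryImage=True)
if m["m00"] > 0:
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print(f"light source at ({cx:.1f}, {cy:.1f}) in the image coordinate system")
```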
Optionally, the acquiring subunit is configured to acquire the position information of the first camera device in the world coordinate system as recorded by a first direction sensor when the first camera device acquires the image to be processed, and to determine the recorded position information as the second position information, where the first direction sensor is disposed adjacent to the first camera device.
Optionally, the fusion processing unit 40 includes a fourth determining subunit and a fusion processing subunit, wherein:
the fourth determining subunit is used for determining attribute information of the light source in the image to be fused and scaling the material image according to the attribute information;
and the fusion processing subunit is used for performing fusion processing on the scaled material image and the image to be fused to obtain a target fusion image including the target light spot.
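By way of example only, the scaling and fusion could be sketched as below; the scale factor, the paste position, and the screen-blend operator are all assumptions, since the patent leaves both the attribute-to-scale mapping and the fusion operator open:

```python
import cv2
import numpy as np

base = cv2.imread("to_fuse.jpg").astype(np.float32)
spot = cv2.imread("material.jpg").astype(np.float32)

# Scale the material image according to an attribute of the light
# source; a fixed factor stands in for e.g. the detected source size.
scale = 0.5  # hypothetical; would be derived from attribute information
spot = cv2.resize(spot, None, fx=scale, fy=scale)

# Screen blending brightens without darkening, so the starburst sits
# naturally over the light source. (x, y) is the detected position.
x, y = 800, 300  # hypothetical light-source position
roi = base[y:y + spot.shape[0], x:x + spot.shape[1]]
crop = spot[:roi.shape[0], :roi.shape[1]]  # guard against image borders
roi[:] = 255.0 - (255.0 - roi) * (255.0 - crop) / 255.0

cv2.imwrite("target_fused.jpg", base.astype(np.uint8))
```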
Optionally, before the material image is determined, the image processing apparatus is configured to: acquire, through a second camera device, an image containing the target light spot at each of at least one position to obtain at least one reference image; and the second determining unit in the image processing apparatus is used for determining the material image from the at least one reference image according to the position information of the light source in the image to be fused.
Optionally, the position information of the light source in the image to be fused represents the position information of the light source in an image coordinate system; the apparatus is further configured to: search the at least one reference image for a reference image corresponding to the position information, in the image coordinate system, of the light source in the image to be fused, and determine the found reference image as the material image.
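A sketch of this image-coordinate lookup, assuming a hypothetical reference library in which each entry records the pixel position of its target light spot:

```python
import numpy as np

def pick_material_by_pixel(light_xy, refs):
    """Return the reference image whose target-spot pixel position lies
    nearest to the detected light source; `refs` is a hypothetical list
    of (image_path, (x, y)) pairs."""
    return min(refs, key=lambda r: np.hypot(r[1][0] - light_xy[0],
                                            r[1][1] - light_xy[1]))[0]

# Hypothetical reference library.
refs = [("ref_top_left.jpg", (200, 150)), ("ref_bottom_right.jpg", (900, 600))]
print(pick_material_by_pixel((800, 300), refs))  # -> ref_bottom_right.jpg
```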
Optionally, the position information of the light source in the image to be fused represents the position information of the light source in a world coordinate system; the apparatus is further configured to: acquire the correspondence between the position information, in an image coordinate system, of the target light spot in each reference image and first attitude information, where the first attitude information is the attitude information of the second camera device when each reference image is shot; determine, according to the correspondence, the position information in the world coordinate system of the target light spot in each reference image; determine the position information in the world coordinate system of the light source in the image to be fused according to second attitude information and the position information of the light source in the image coordinate system in the image to be fused, where the second attitude information is the attitude information of the first camera device when the image to be fused is shot; and determine the material image from the at least one reference image according to the position information of the target light spot in the world coordinate system in each reference image and the position information of the light source in the world coordinate system in the image to be fused.
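Correspondingly, the world-coordinate variant can be sketched as a nearest-direction match; the reference library and its precomputed world-frame spot directions (derived from the first attitude information) are hypothetical:

```python
import numpy as np

def pick_material_by_world_dir(light_dir, refs):
    """Return the reference image whose target spot, expressed as a unit
    direction in the world coordinate system, is most aligned with the
    light source's world direction; `refs` is a hypothetical list of
    (image_path, unit_direction) pairs."""
    return max(refs, key=lambda r: float(np.dot(r[1], light_dir)))[0]

# Hypothetical reference library with world-frame spot directions.
refs = [("ref_up.jpg", np.array([0.0, 0.7, 0.714])),
        ("ref_left.jpg", np.array([-0.7, 0.0, 0.714]))]
print(pick_material_by_world_dir(np.array([0.1, 0.6, 0.79]), refs))  # -> ref_up.jpg
```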
Optionally, the second camera device is fixed on a rotating platform with a second direction sensor built therein, the rotating platform is used for rotating the second camera device so as to change the position information of the target light spot in the image coordinate system, and the second direction sensor is used for recording the position information of the second camera device in the world coordinate system.
The apparatus provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where this apparatus embodiment does not mention a detail, reference may be made to the corresponding content in the foregoing method embodiments.
Further, the present embodiment also provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the method provided by the foregoing method embodiments.
The computer program product of the image processing method and apparatus and the electronic device provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For specific implementation, reference may be made to the method embodiments, which will not be repeated here.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific circumstances.
In the description of the present invention, it should be noted that the terms "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience of describing the present invention and simplifying the description; they do not indicate or imply that the referred device or element must have a specific orientation or be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only a logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention, or the part of it that contributes in essence, may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific implementations of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still, within the technical scope disclosed herein, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall all be covered by its protection scope.
Claims (13)
1. An image processing method, comprising:
acquiring an image to be processed; performing image fusion processing on the image to be processed to remove stray light spots in the image to be processed to obtain an image to be fused;
determining the position information of a light source in the image to be fused;
determining a material image based on the position information, wherein the material image comprises a target light spot, and the position information of the target light spot in the material image corresponds to the position information of a light source in the image to be fused;
and fusing the image to be fused and the material image to obtain a target fusion image containing the target light spot.
2. The method of claim 1, wherein the image to be processed comprises: a normally exposed image to be processed and at least one under-exposed image to be processed;
performing image fusion processing on the image to be processed to remove stray light spots in the image to be processed, and obtaining the image to be fused comprises:
and performing HDR processing on the normally exposed image to be processed and the at least one under-exposed image to be processed to obtain the image to be fused.
3. The method of claim 1, wherein determining the position information of the light source in the image to be fused comprises:
determining the position information of the light source in an image coordinate system according to the image to be fused to obtain first position information; the image coordinate system is determined based on the image to be fused;
acquiring position information of a first camera device for acquiring the image to be processed in a world coordinate system to obtain second position information; determining third position information based on the second position information, the first position information and the world coordinate system; the third location information is used to characterize: position information of the light source in the world coordinate system;
and determining the first position information and/or the third position information as the position information of the light source in the image to be fused.
4. The method of claim 3, wherein determining the position information of the light source in the image coordinate system according to the image to be fused comprises:
preprocessing the image to be fused to obtain the preprocessed image to be fused; wherein the preprocessing comprises at least one of: binarization processing, erosion processing, and dilation processing;
determining the gravity center position information of the light source in the image to be fused after preprocessing through a target gravity center function;
and determining the position information of the light source in the image coordinate system based on the gravity center position information and the image coordinate system.
5. The method of claim 3, wherein the acquiring position information, in the world coordinate system, of a first camera device used for acquiring the image to be processed to obtain second position information comprises:
acquiring the position information of the first camera device in the world coordinate system recorded by a first direction sensor when the first camera device acquires the image to be processed; and determining the recorded position information as the second position information, wherein the first direction sensor is disposed adjacent to the first camera device.
6. The method according to claim 1, wherein the fusing the image to be fused and the material image to obtain a target fusion image containing the target light spot comprises:
determining attribute information of the light source in the image to be fused, and scaling the material image according to the attribute information;
and fusing the scaled material image and the image to be fused to obtain the target fusion image containing the target light spot.
7. The method of claim 1, wherein
before the determining the material image, the method further comprises: acquiring, through a second camera device, an image containing the target light spot at each of at least one position to obtain at least one reference image;
and the determining the material image based on the position information comprises: determining the material image from the at least one reference image according to the position information of the light source in the image to be fused.
8. The method according to claim 7, wherein the position information of the light source in the image to be fused represents the position information of the light source in an image coordinate system;
determining the material image in the at least one reference image according to the position information of the light source in the image to be fused comprises:
and searching the at least one reference image for a reference image corresponding to the position information, in the image coordinate system, of the light source in the image to be fused, and determining the found reference image as the material image.
9. The method according to claim 7, wherein the position information of the light source in the image to be fused represents the position information of the light source in a world coordinate system;
determining the material image in the at least one reference image according to the position information of the light source in the image to be fused further comprises:
acquiring a correspondence between position information, in an image coordinate system, of a target light spot in each reference image and first attitude information, wherein the first attitude information is attitude information of the second camera device when shooting each reference image;
determining the position information of the target light spot in each reference image in a world coordinate system according to the corresponding relation;
determining the position information in the world coordinate system of the light source in the image to be fused according to second attitude information and the position information of the light source in the image coordinate system in the image to be fused; wherein the second attitude information is attitude information of the first camera device when shooting the image to be fused;
and determining the material image in the at least one reference image according to the position information of the target light spot in the world coordinate system in each reference image and the position information of the light source in the world coordinate system in the image to be fused.
10. The method according to claim 7, wherein the second camera device is fixed on a rotating platform with a built-in second direction sensor, the rotating platform is used for rotating the second camera device to change the position information of the target light spot in an image coordinate system, and the second direction sensor is used for recording the position information of the second camera device in the world coordinate system.
11. An image processing apparatus characterized by comprising:
the acquisition fusion unit is used for acquiring an image to be processed; performing image fusion processing on the image to be processed to remove stray light spots in the image to be processed to obtain an image to be fused;
the first determining unit is used for determining the position information of the light source in the image to be fused;
the second determining unit is used for determining a material image based on the position information, wherein the material image comprises a target light spot, and the position information of the target light spot in the material image corresponds to the position information of a light source in the image to be fused;
and the fusion processing unit is used for carrying out fusion processing on the image to be fused and the material image to obtain a target fusion image containing a target light spot.
12. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 10 when executing the computer program.
13. A computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010405431.1A CN111741214A (en) | 2020-05-13 | 2020-05-13 | Image processing method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111741214A true CN111741214A (en) | 2020-10-02 |
Family
ID=72647260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010405431.1A Pending CN111741214A (en) | 2020-05-13 | 2020-05-13 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111741214A (en) |
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103297702A (en) * | 2013-05-06 | 2013-09-11 | 中航华东光电有限公司 | Image processing device for aviation onboard helmet-mounted locating system and image processing method thereof |
CN104376545A (en) * | 2013-08-16 | 2015-02-25 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104735347A (en) * | 2013-12-24 | 2015-06-24 | 三星泰科威株式会社 | Autofocus adjusting method and apparatus |
CN103871070A (en) * | 2014-04-03 | 2014-06-18 | 深圳市德赛微电子技术有限公司 | Automatic calibration method of vehicle-mounted panoramic imaging system |
CN105608664A (en) * | 2014-11-19 | 2016-05-25 | 深圳市腾讯计算机系统有限公司 | Photo processing method and terminal |
WO2017000664A1 (en) * | 2015-06-30 | 2017-01-05 | 华为技术有限公司 | Photographing method and apparatus |
CN105303161A (en) * | 2015-09-21 | 2016-02-03 | 广东欧珀移动通信有限公司 | Method and device for shooting multiple people |
US20180352132A1 (en) * | 2017-05-31 | 2018-12-06 | Guangdong Oppo Mobile Telecommunications Corp., Lt D. | Image processing method and related products |
CN107770454A (en) * | 2017-09-05 | 2018-03-06 | 努比亚技术有限公司 | A kind of image processing method, terminal and computer-readable recording medium |
CN111433813A (en) * | 2017-12-07 | 2020-07-17 | 高通股份有限公司 | Method and apparatus for image change detection |
CN108332748A (en) * | 2017-12-18 | 2018-07-27 | 中国电子科技集团公司电子科学研究院 | A kind of indoor visible light localization method and device |
CN109978754A (en) * | 2017-12-28 | 2019-07-05 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN109035155A (en) * | 2018-06-15 | 2018-12-18 | 宁波大学 | A kind of more exposure image fusion methods of halation removal |
CN108900778A (en) * | 2018-06-27 | 2018-11-27 | 努比亚技术有限公司 | A kind of image pickup method, mobile terminal and computer readable storage medium |
CN108846377A (en) * | 2018-06-29 | 2018-11-20 | 百度在线网络技术(北京)有限公司 | Method and apparatus for shooting image |
CN109285190A (en) * | 2018-09-06 | 2019-01-29 | 广东天机工业智能系统有限公司 | Object positioning method, device, electronic equipment and storage medium |
CN109360176A (en) * | 2018-10-15 | 2019-02-19 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN109241956A (en) * | 2018-11-19 | 2019-01-18 | Oppo广东移动通信有限公司 | Method, apparatus, terminal and the storage medium of composograph |
CN111209775A (en) * | 2018-11-21 | 2020-05-29 | 杭州海康威视数字技术股份有限公司 | Signal lamp image processing method, device, equipment and storage medium |
CN110174093A (en) * | 2019-05-05 | 2019-08-27 | 腾讯科技(深圳)有限公司 | Localization method, device, equipment and computer readable storage medium |
CN110146869A (en) * | 2019-05-21 | 2019-08-20 | 北京百度网讯科技有限公司 | Determine method, apparatus, electronic equipment and the storage medium of coordinate system conversion parameter |
CN110245611A (en) * | 2019-06-14 | 2019-09-17 | 腾讯科技(深圳)有限公司 | Image-recognizing method, device, computer equipment and storage medium |
CN110971822A (en) * | 2019-11-29 | 2020-04-07 | Oppo广东移动通信有限公司 | Picture processing method and device, terminal equipment and computer readable storage medium |
CN111127358A (en) * | 2019-12-19 | 2020-05-08 | 苏州科达科技股份有限公司 | Image processing method, device and storage medium |
CN111145135A (en) * | 2019-12-30 | 2020-05-12 | 腾讯科技(深圳)有限公司 | Image descrambling processing method, device, equipment and storage medium |
CN111915505A (en) * | 2020-06-18 | 2020-11-10 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112884689A (en) * | 2021-02-25 | 2021-06-01 | 景德镇陶瓷大学 | Highlight removing method for image on strong reflection surface |
CN112884689B (en) * | 2021-02-25 | 2023-11-17 | 景德镇陶瓷大学 | Method for removing high light of strong reflection surface image |
CN112967201A (en) * | 2021-03-05 | 2021-06-15 | 厦门美图之家科技有限公司 | Image illumination adjusting method and device, electronic equipment and storage medium |
CN113052056A (en) * | 2021-03-19 | 2021-06-29 | 华为技术有限公司 | Video processing method and device |
CN116051434A (en) * | 2022-07-22 | 2023-05-02 | 荣耀终端有限公司 | Image processing method and related electronic equipment |
CN116051434B (en) * | 2022-07-22 | 2023-11-14 | 荣耀终端有限公司 | Image processing method and related electronic equipment |
CN117928386A (en) * | 2024-03-22 | 2024-04-26 | 四川拓及轨道交通设备股份有限公司 | Portable binocular contact net geometric parameter detection system and method |
CN117928386B (en) * | 2024-03-22 | 2024-05-31 | 四川拓及轨道交通设备股份有限公司 | Portable binocular contact net geometric parameter detection system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109218628B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111741214A (en) | Image processing method and device and electronic equipment | |
US10997696B2 (en) | Image processing method, apparatus and device | |
CN108898567B (en) | Image noise reduction method, device and system | |
JP5871862B2 (en) | Image blur based on 3D depth information | |
CN110121882A (en) | A kind of image processing method and device | |
JP5589548B2 (en) | Imaging apparatus, image processing method, and program storage medium | |
EP3457683A1 (en) | Dynamic generation of image of a scene based on removal of undesired object present in the scene | |
CN109218627B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111586308B (en) | Image processing method and device and electronic equipment | |
CN107787463B (en) | The capture of optimization focusing storehouse | |
CN108632536B (en) | Camera control method and device, terminal and storage medium | |
WO2014187222A1 (en) | Photographing method, device and terminal | |
CN107231524A (en) | Image pickup method and device, computer installation and computer-readable recording medium | |
CN109361853B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110650288B (en) | Focusing control method and device, electronic equipment and computer readable storage medium | |
CN108600610A (en) | Shoot householder method and device | |
CN105391940B (en) | A kind of image recommendation method and device | |
CN104394326A (en) | Photometry method and terminal | |
CN105190229A (en) | Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program | |
WO2021218536A1 (en) | High-dynamic range image synthesis method and electronic device | |
CN112085686A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN116055607A (en) | Zoom smoothness evaluation method and device and electronic equipment | |
CN104954687B (en) | A kind of image generating method and device | |
CN105467741A (en) | Panoramic shooting method and terminal |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201002