CN115423692A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN115423692A
CN115423692A
Authority
CN
China
Prior art keywords
image
sub
region
preset
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110605245.7A
Other languages
Chinese (zh)
Inventor
霍文甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110605245.7A priority Critical patent/CN115423692A/en
Publication of CN115423692A publication Critical patent/CN115423692A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an initial image of a subject; determining a preset sub-region in the initial image, wherein the initial image comprises a plurality of sub-regions obtained by dividing the initial image, and the preset sub-region is a sub-region, among the plurality of sub-regions, in which moire is present; acquiring multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image; and processing the preset sub-region in the initial image according to a reference sub-region in the reference images to obtain a target image, the reference sub-region being a sub-region of a reference image that corresponds to the preset sub-region and satisfies a preset condition. With the method of the disclosure, when moire is present, reference images can be collected by physical means and combined with an algorithmic step, and a target image with reduced moire is obtained by fusion, so that moire is improved or eliminated in a simpler and more convenient way.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of technology and rising living standards, people demand ever higher quality from the images captured with electronic devices. During shooting, if the subject has fine texture, or high-frequency interference arises because the spatial frequency of the subject is close to that of the photosensitive element, the captured image exhibits water-ripple-like color stripes, i.e., moire. Moire has no fixed shape or regularity, and its presence severely degrades image quality.
The related art also lacks an effective method for improving or removing moire.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, apparatus, electronic device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, the method including:
acquiring an initial image of a subject;
determining a preset sub-region in the initial image, wherein the initial image comprises a plurality of sub-regions obtained by dividing the initial image, and the preset sub-region is a sub-region, among the plurality of sub-regions, in which moire is present;
acquiring multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image;
processing the preset sub-region in the initial image according to a reference sub-region in the reference images to obtain a target image; the reference sub-region is a sub-region of a reference image that corresponds to the preset sub-region and satisfies a preset condition.
In some embodiments, the acquiring an initial image of a subject comprises:
acquiring motion data collected by a preset sensor;
in response to the motion data being within a threshold range, determining that the electronic device is in a stationary state;
and acquiring the initial image in the stationary state.
In some embodiments, the acquiring, in the same shooting direction as the initial image, multiple frames of reference images of the subject at different angles includes:
controlling an image sensor of the electronic device to rotate through a target angle range around a central axis of the image sensor;
and during the rotation of the image sensor, acquiring the multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image.
In some embodiments, the acquiring multiple frames of reference images of the subject at different angles includes:
acquiring one frame of reference image at every set angle interval; or acquiring one frame of reference image at every set time interval.
In some embodiments, the acquiring multiple frames of reference images of the subject at different angles includes:
determining the target angle range according to the distribution region of the preset sub-region in the initial image;
and acquiring one frame of reference image at every set angle interval within the target angle range.
In some embodiments, the processing the preset sub-region in the initial image according to the reference sub-region in the reference image to obtain the target image includes:
determining at least one frame of reference image, among the multiple frames of reference images, in which the reference sub-region exists;
among the reference images in which the reference sub-region exists, respectively determining the similarity between the reference sub-region of each frame of reference image and the preset sub-region of the initial image;
and determining a reference image whose similarity is within a reference range as a target reference image, and processing the preset sub-region of the initial image with the reference sub-region of at least one frame of the target reference image to obtain the target image.
In some embodiments, the processing the preset sub-region of the initial image with the reference sub-region of the at least one frame of target reference image to obtain the target image includes:
acquiring the registered reference sub-region from at least one frame of the target reference image according to the image angle of the preset sub-region;
and replacing the corresponding preset sub-region in the initial image with the registered reference sub-region to synthesize the target image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
an obtaining module configured to obtain an initial image of a subject;
a determining module configured to determine a preset sub-region in the initial image, wherein the initial image comprises a plurality of sub-regions obtained by dividing the initial image, and the preset sub-region is a sub-region, among the plurality of sub-regions, in which moire is present;
an acquisition module configured to acquire multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image;
and a processing module configured to process the preset sub-region in the initial image according to a reference sub-region in the reference images to obtain a target image; the reference sub-region is a sub-region of a reference image that corresponds to the preset sub-region and satisfies a preset condition.
In some embodiments, the obtaining module is further configured to:
acquire motion data collected by a preset sensor;
in response to the motion data being within a threshold range, determine that the electronic device is in a stationary state;
and acquire the initial image in the stationary state.
In some embodiments, the acquisition module is configured to:
control an image sensor of the electronic device to rotate through a target angle range around a central axis of the image sensor;
and, during the rotation of the image sensor, acquire the multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image.
In some embodiments, the acquisition module is further configured to:
acquire one frame of reference image at every set angle interval; or acquire one frame of reference image at every set time interval.
In some embodiments, the acquisition module is further configured to:
determine the target angle range according to the distribution region of the preset sub-region in the initial image;
and acquire one frame of reference image at every set angle interval within the target angle range.
In some embodiments, the processing module is further configured to:
determine at least one frame of reference image, among the multiple frames of reference images, in which the reference sub-region exists;
among the reference images in which the reference sub-region exists, respectively determine the similarity between the reference sub-region of each frame of reference image and the preset sub-region of the initial image;
and determine a reference image whose similarity is within a reference range as a target reference image, and process the preset sub-region of the initial image with the reference sub-region of at least one frame of the target reference image to obtain the target image.
In some embodiments, the processing module is configured to:
acquire the registered reference sub-region from at least one frame of the target reference image according to the image angle of the preset sub-region;
and replace the corresponding preset sub-region in the initial image with the registered reference sub-region to synthesize the target image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image processing method as described in any one of the above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method as described in any one of the above.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: with the method of the present disclosure, when moire is present, multiple frames of reference images can be acquired over an angle range by physical means. Then, combined with an algorithmic step, the preset sub-regions in which moire exists are replaced with the reference sub-regions of the reference images that satisfy the condition, and a target image with reduced moire is finally synthesized; this way of improving or eliminating moire is simpler and more convenient.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart illustrating a method according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating a method according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating a method according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating a method according to an exemplary embodiment.
FIG. 5 is a flowchart illustrating a method according to an exemplary embodiment.
FIG. 6 is a schematic diagram illustrating an initial image according to an exemplary embodiment.
FIG. 7 is a schematic diagram illustrating a reference image according to an exemplary embodiment.
FIG. 8 is a schematic diagram illustrating moire in an initial image according to an exemplary embodiment.
FIG. 9 is a block diagram illustrating an apparatus according to an exemplary embodiment.
FIG. 10 is a block diagram illustrating an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the invention, as recited in the appended claims.
With the development of technology and rising living standards, people demand ever higher quality from the images captured with electronic devices. During shooting, if the subject has fine texture, or high-frequency interference arises because the spatial frequency of the subject is close to that of the photosensitive element, the captured image exhibits water-ripple-like color stripes, i.e., moire. Moire has no fixed shape or regularity, and its presence severely degrades image quality.
To eliminate moire, the related art generally adopts the following approaches. First, a low-pass filter is mounted on the lens so that an image without moire is captured. Second, moire is removed in post-processing with software such as Photoshop. Of these, the first approach reduces image sharpness and has large usage limitations: it suits a camera but not a device such as a commonly used mobile phone. The second approach is cumbersome to operate and demands considerable technical skill.
The related art thus lacks an effective method for improving or removing moire.
In an embodiment of the present disclosure, an image processing method is provided, the method including: acquiring an initial image of a subject; determining a preset sub-region in the initial image, wherein the initial image comprises a plurality of sub-regions obtained by dividing the initial image, and the preset sub-region is a sub-region, among the plurality of sub-regions, in which moire is present; acquiring multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image; and processing the preset sub-region in the initial image according to a reference sub-region in the reference images to obtain a target image, the reference sub-region being a sub-region of a reference image that corresponds to the preset sub-region and satisfies a preset condition. With this method, when moire is present, multiple frames of reference images can be acquired over an angle range by physical means; then, combined with an algorithmic step, the preset sub-regions in which moire exists are replaced with the reference sub-regions that satisfy the condition, and a target image with reduced moire is finally synthesized, so that moire is improved or eliminated in a simpler and more convenient way.
In an exemplary embodiment, the image processing method of this embodiment is applied to an electronic device. The electronic device is, for example, a mobile phone, a tablet computer, a notebook computer, a smart wearable device, or a camera.
As shown in fig. 1, the method of the present embodiment may include the following steps:
s110, acquiring an initial image of the shot object.
And S120, determining a preset sub-area in the initial image.
And S130, acquiring reference images of the shot object under different angles of multiple frames in the same shooting direction of the initial image.
And S140, processing the preset sub-region in the initial image according to the reference sub-region in the reference image to obtain the target image.
The method of the embodiments of the present disclosure is applied in a camera application of an electronic device; accordingly, before this embodiment is executed, the camera should be opened on the electronic device, and the method is executed on the camera's application interface.
In step S110, an initial image of the subject is acquired in response to a user's operation instruction (e.g., tapping the shutter). The acquired initial image is available for the user to preview on the camera's application interface.
In step S120, the preset operation instruction may be, for example, a tap on a "remove moire" option within the application interface, or a "remove moire" voice instruction.
In this step, the initial image comprises a plurality of sub-regions obtained by dividing it, for example a plurality of uniformly arranged sub-regions. The division can be performed in various ways, such as dividing the initial image into m × n sub-regions according to a user's selection instruction or a program's default setting. The division should keep the computation and matching workload low; in the example shown in fig. 6 or fig. 8, the initial image comprises 4 × 4 sub-regions.
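As an illustrative sketch of this division (the nested-list pixel representation and function name here are assumptions, not part of the disclosure), an image can be split into m × n sub-regions as follows:

```python
def split_into_subregions(image, m, n):
    """Split an image (a 2-D list of pixel values) into m x n sub-regions.

    Assumes the image height is divisible by m and the width by n, as in
    the 4 x 4 grid of the example; returns a dict mapping the (row, col)
    grid position of each sub-region to its pixel block.
    """
    h, w = len(image), len(image[0])
    sh, sw = h // m, w // n  # sub-region height and width
    regions = {}
    for i in range(m):
        for j in range(n):
            regions[(i, j)] = [row[j * sw:(j + 1) * sw]
                               for row in image[i * sh:(i + 1) * sh]]
    return regions
```

With `m = n = 4`, a grid like that of fig. 6 is obtained, and each block can then be tested for moire independently.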
In this step, it is determined whether moire exists in each sub-region of the initial image, and the sub-regions in which moire exists are recorded as preset sub-regions.
In one example, a deep-learning neural network is used to determine whether moire exists in each sub-region of the initial image. For example, a trained network model pre-stored in an AI chip of the electronic device may take the initial image as input and output the region information where moire exists in the initial image.
In another example, the number of edge pixels in each sub-region of the initial image is counted and compared with an edge-pixel threshold for the corresponding sub-region. When the count exceeds the threshold, edge pixels that do not belong to the subject's edges are deemed present in the image, and moire is judged to have occurred.
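The edge-pixel heuristic described above might be sketched as follows; the simple gradient test is a stand-in for a real edge detector (the text does not fix one), and all thresholds are illustrative assumptions:

```python
def count_edge_pixels(block, grad_threshold):
    """Count pixels whose horizontal or vertical intensity step exceeds
    grad_threshold; `block` is a 2-D list of grayscale values."""
    edges = 0
    for r in range(len(block) - 1):
        for c in range(len(block[0]) - 1):
            gx = abs(block[r][c + 1] - block[r][c])  # horizontal step
            gy = abs(block[r + 1][c] - block[r][c])  # vertical step
            if max(gx, gy) > grad_threshold:
                edges += 1
    return edges


def has_moire(block, grad_threshold, edge_count_threshold):
    """Flag a sub-region as a preset sub-region when its edge-pixel count
    exceeds the per-region threshold, as described above."""
    return count_edge_pixels(block, grad_threshold) > edge_count_threshold
```

A smooth block yields a count of zero, while a high-frequency striped block exceeds any modest threshold and is flagged.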
In step S130, each frame of reference image corresponds to the initial image: for example, each frame of reference image covers the same shooting area of the subject as the initial image, at the same shooting distance, with image content within an error range, and so on. Each frame of reference image likewise comprises a plurality of uniformly arranged sub-regions, and the sub-regions of each frame of reference image correspond one-to-one to those of the initial image. For example, in the example shown in fig. 6 or fig. 8 the initial image comprises 4 × 4 sub-regions, and as shown in fig. 7 each frame of reference image likewise comprises 4 × 4 sub-regions; the image content of a sub-region of the reference image should correlate with that of the corresponding sub-region of the initial image.
It is understood that the subject may remain stationary while the reference images are acquired within the target angle range. During acquisition, images that clearly depart from the subject can be processed by algorithms such as cropping and fusion, so that the image content of each acquired frame of reference image is within an error range of the initial image.
In step S140, the reference sub-region is a sub-region of the reference image that corresponds to the preset sub-region and satisfies the preset condition; for example, the image content of the reference sub-region corresponds to the image content of the preset sub-region. The preset condition characterizes that the corresponding sub-region contains no moire, or only insignificant moire, e.g., a moire effect below a threshold, or below that of the initial image. That is, in this step, a sub-region of the reference image that corresponds to a preset sub-region (a sub-region of the initial image in which moire exists) and satisfies the preset condition is taken as a reference sub-region.
In this step, moire may exist in one or more preset sub-regions of the initial image, and among the multiple frames of reference images obtained in step S130, reference sub-regions may exist in one or more frames. The processor can replace the preset sub-regions, in one-to-one correspondence, with the reference sub-regions of any single frame of reference image, synthesizing a target image without moire or with insignificant moire; or it can select different reference sub-regions from multiple frames of reference images, replace the preset sub-regions in one-to-one correspondence, and synthesize such a target image.
In an exemplary embodiment, as shown in fig. 2, step S110 of the present embodiment includes the following steps:
s1101, motion data collected by a preset sensor are obtained.
And S1102, in response to the motion data being in the threshold range, determining that the electronic equipment is in a static state.
And S1103, acquiring an initial image in a static state.
In step S1101, the preset sensor is, for example, an acceleration sensor or a gyroscope. Taking a gyroscope as the preset sensor, the motion data is, for example, the average angular-acceleration data of the electronic device: the device has angular accelerations in the X, Y, and Z directions, and average data computed from these three directions serves as the motion data.
In step S1102, the threshold range may be pre-stored in the electronic device and represents reference data for when the electronic device is stationary. The processor compares the acquired motion data with the threshold range, and when the motion data is within the threshold range, determines that the electronic device is in a stationary state.
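A minimal sketch of this check, assuming (the text does not specify the exact averaging rule) that the motion data is the mean magnitude of three-axis gyroscope samples compared against a single upper bound:

```python
import math


def is_stationary(gyro_samples, threshold):
    """Return True when the mean magnitude of (x, y, z) angular-acceleration
    samples is within the threshold, i.e. the device is deemed stationary."""
    if not gyro_samples:
        return False  # no data: do not assume a stationary state
    mean_mag = sum(math.sqrt(x * x + y * y + z * z)
                   for x, y, z in gyro_samples) / len(gyro_samples)
    return mean_mag <= threshold
```

The threshold value would be tuned per device; tiny readings pass, while a clearly moving device fails the check.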
In step S1103, the stationary state may correspond to a tripod mode of the electronic device, i.e., the device is not moving and is in a stable or static state. In the stationary state the electronic device can keep facing the subject at the same angle, e.g., remain opposite face A of the subject. Thus, when the multiple frames of reference images are acquired in step S130, images of face A of the subject at different angles are obtained. This also reduces how much of face B of the subject is captured, reducing interference factors in the reference images and hence the difficulty and time cost of image processing.
In this step, the processor of the electronic device may determine, from the motion data detected by the sensor, whether the device is stationary. If so, after the initial image is obtained, step S120 of this embodiment continues to be executed; if not, normal shooting is performed, which may end once the initial image is obtained (without removing moire). It will be appreciated that the moire-removing shooting mode of this embodiment takes more time than normal shooting.
In an exemplary embodiment, as shown in fig. 3, step S130 in this embodiment may include the following steps:
s1301, controlling an image sensor of the electronic device to rotate a target angle range around a central axis of the image sensor.
And S1302, in the rotation process of the image sensor, collecting multiple frames of reference images of the shot object under different angles in the same shooting direction of the initial image.
In step S1301, the camera assembly of the electronic device includes, for example, a lens group, a color filter, and an image sensor. In the imaging system of the camera assembly, the lens group can be regarded as the object side, and the image sensor as the image side or imaging plane. The image sensor is, for example, a CMOS (complementary metal oxide semiconductor) sensor.
The central axis of the image sensor refers, for example, to an axis that passes through the center of the image sensor and is parallel to or coincident with the optical axis of the camera assembly. The target angle range is, for example, 0° to 180° or 0° to 360°. The processor controls a driving assembly to rotate the image sensor around the central axis, and the time taken for the sensor to sweep the target angle range can be set as required.
In step S1302, during the rotation of the image sensor, the processor may issue control instructions to acquire reference images at different angles or moments, controlling the camera assembly to acquire multiple frames of reference images at different angles.
In one example, one frame of reference image is acquired at every set angle interval.
In this example, during rotation, multiple frames of reference images are acquired across the target angle range at the set angle interval. For example, if one frame of reference image is acquired for every 30° of rotation, six frames can be acquired after the sensor rotates 180°, and twelve frames after it rotates 360°.
In another example, one frame of reference image is acquired at every set time interval.
In this example, during rotation, one frame of reference image is acquired at every set time interval. For example, the image sensor acquires one frame of reference image every 1 s from the start of rotation until a preset number of frames, e.g., 10, has been acquired.
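Both acquisition schedules can be sketched as simple generators of capture points; the function names and integer degree/second units are illustrative assumptions:

```python
def capture_angles(target_range_deg, step_deg):
    """Rotation angles at which one reference frame is captured under the
    'every set angle' rule; e.g. a 30-degree step over 180 degrees yields
    six frames, and over 360 degrees yields twelve."""
    return list(range(step_deg, target_range_deg + 1, step_deg))


def capture_times(interval_s, frame_count):
    """Capture timestamps under the 'every set time interval' rule, e.g.
    one frame per second until a preset number of frames is reached."""
    return [interval_s * k for k in range(1, frame_count + 1)]
```

Either schedule would be translated into the processor's control instructions to the camera assembly during rotation.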
In another example, the method may further include the steps of:
s1302-1, determining a target angle range according to a distribution area of the preset sub-area in the initial image.
In this step, for example, when the initial image is a human body image, the predetermined subregion (region where moire exists) is a face portion. The step can further narrow the rotating range within the target angle range, and determine the target angle range with the best human face partial image shooting effect.
Or, determining a distribution area of a preset subarea (an area with moire fringes) in the initial image, determining angle information of the moire fringes relative to a reference direction in the preset subarea, and determining a target angle range.
S1302-2, in the range of the target angle, setting an angle at intervals, and collecting a frame of reference image. In this step, the rotation range of the image sensor becomes the reference angle range, and the angle of the interval may be set to 1 °, for example. The rotating range is reduced, and the efficiency of acquiring the reference image is improved.
In an exemplary embodiment, as shown in fig. 4, step S140 in this embodiment may include the following steps:
s1401, at least one frame of reference image of a reference sub-area exists in the multi-frame reference images.
S1402, in the reference image with the reference sub-region, respectively determining the similarity between the reference sub-region of each frame of reference image and the preset sub-region of the initial image.
And S1403, determining a reference image with the similarity within the reference range as a target reference image, and processing a preset sub-region of the initial image by using a reference sub-region of at least one frame of the target reference image to obtain the target image.
In step S1401, as in the preceding embodiments, the sub-regions of each frame of reference image correspond one-to-one to the sub-regions of the initial image. For each of the acquired frames of reference images, it is determined whether the sub-regions corresponding to the preset sub-regions satisfy the preset condition; a sub-region that corresponds to a preset sub-region and satisfies the preset condition is a reference sub-region. That is, this step determines whether each reference image contains a reference sub-region.
For example, with the example shown in fig. 6 or fig. 8, among the 4 × 4 sub-regions of the initial image the preset sub-regions are P5 to P8. Then, for each frame among the multiple reference images, it is determined whether its P5' to P8' sub-regions satisfy the preset condition; if at least one of the P5' to P8' sub-regions of a frame satisfies the preset condition, that reference image contains at least one reference sub-region.
In step S1402, the sub-region corresponding to the preset sub-region and satisfying the preset condition is a reference sub-region, and the reference image corresponding to the reference sub-region is: there is a reference image of the reference sub-region.
For example, in the example shown in fig. 6, if there are a plurality of preset sub-regions, there may be at least one reference sub-region in each frame of reference image. For example, the sub-regions P5 'to P7' in the first frame reference picture satisfy the predetermined condition, i.e. include the three reference sub-regions P5 'to P7'. The P8 'sub-region in the reference picture of the second frame satisfies the predetermined condition, i.e. includes the reference sub-region P8'. In the third frame reference picture, as shown in fig. 7, the P5 'to P8' sub-regions all satisfy the predetermined condition, i.e. include four reference sub-regions P5 'to P8'. And the first frame reference image, the second frame reference image and the third frame reference image all have reference sub-regions and are reserved for standby.
In this step, reference images without any reference sub-region are removed.
Each reference image in which a reference sub-region exists is matched against the initial image. The matching process may be: identifying and comparing the similarity between the reference sub-region of the reference image and the preset sub-region of the initial image.
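One plausible similarity measure for this matching step is normalized cross-correlation; the embodiment does not name a specific metric, so the choice here is an assumption and `region_similarity` is a hypothetical helper:

```python
import numpy as np

def region_similarity(ref_region, preset_region):
    """Normalized cross-correlation in [-1, 1] between a reference sub-region
    and a preset sub-region; 1.0 means the contents match exactly up to
    brightness offset and scale."""
    a = ref_region.astype(np.float64).ravel()
    b = preset_region.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        # at least one region is constant; identical constants count as a match
        return 1.0 if np.allclose(a, b) else 0.0
    return float(np.dot(a, b) / denom)
```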
In step S1403, a target reference image is a reference image whose similarity to the initial image falls within the reference range during matching. A similarity within the reference range indicates that the image of the subject in each sub-region field of view of the reference image, captured after the CMOS sensor is rotated, is consistent with the image of the subject in the corresponding sub-region field of view of the initial image. The target reference image obtained after rotation is thus a view of the subject from another angle.
The preset sub-region of the initial image is repaired with the reference sub-region of the target reference image, obtaining a target image free of moiré.
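The repair itself can be sketched as a grid-aligned replacement, assuming the same 4 × 4 division as the earlier example; `repair_sub_region` is a hypothetical helper, not the patent's implementation:

```python
import numpy as np

def repair_sub_region(initial, ref_region, row, col, rows=4, cols=4):
    """Replace the (row, col) grid cell of the initial image with the
    registered reference sub-region; returns a repaired copy, leaving
    the initial image untouched."""
    repaired = initial.copy()
    h, w = initial.shape[:2]
    sh, sw = h // rows, w // cols
    repaired[row * sh:(row + 1) * sh, col * sw:(col + 1) * sw] = ref_region
    return repaired
```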
When the similarity between a reference image and the initial image is outside the reference range, this indicates that the image in a sub-region field of view of the reference image is inconsistent with the image in the corresponding sub-region field of view of the initial image. For example, the P16 sub-region in the initial image may show the edge of the subject while the P16' sub-region in the reference image shows the background.
In this step, the target reference images are retained, and the reference images that fail to match are removed.
In this embodiment, as shown in fig. 5, step S1403 may include the following steps:
S1403-1, acquiring the registered reference sub-region from at least one frame of the target reference image according to the image angle of the preset sub-region.
S1403-2, controlling the registered reference sub-region to replace the corresponding preset sub-region in the initial image, and synthesizing the target image.
In step S1403-1, the reference sub-region may be acquired, for example, by cropping the reference image.
In a first example, when there is one preset sub-region, a reference sub-region is obtained from any frame of the target reference image, and image registration is performed between the reference sub-region and the preset sub-region according to the image angle of the preset sub-region. For example, the acquired reference sub-region is rotated by an appropriate angle to match the image angle of the preset sub-region.
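The "rotating by an appropriate angle" registration can be sketched with a nearest-neighbour rotation about the region centre; a production implementation would likely use interpolated resampling, so this is a simplified stand-in and `register_by_rotation` is a hypothetical name:

```python
import numpy as np

def register_by_rotation(ref_region, angle_deg):
    """Rotate the cropped reference sub-region back by the sensor's rotation
    angle so it lines up with the preset sub-region (nearest-neighbour
    resampling about the region centre)."""
    h, w = ref_region.shape[:2]
    out = np.zeros_like(ref_region)
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find the source coordinate
    sx = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    sy = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.rint(sx).astype(int)
    sy = np.rint(sy).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys[valid], xs[valid]] = ref_region[sy[valid], sx[valid]]
    return out
```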
In a second example, when there are multiple preset sub-regions, either of the following two approaches may be used:
In the first approach, a plurality of registered reference sub-regions are determined in a single first target reference image.
In this approach, the set of target reference images includes a first target reference image containing reference sub-regions in one-to-one correspondence with all of the preset sub-regions. For example, as shown in fig. 6 or fig. 8, the P5 to P8 sub-regions in the initial image are all preset sub-regions, and as shown in fig. 7, the P5' to P8' sub-regions in the reference image are all reference sub-regions. The first target reference image is cropped to obtain the plurality of reference sub-regions. Registration is performed as in the first example above.
In the second approach, reference sub-regions are determined separately in each frame of the multiple frames of target reference images, obtaining a plurality of registered reference sub-regions.
In this approach, no single target reference image contains reference sub-regions matching all of the preset sub-regions in number, so the reference sub-regions are cropped from multiple target reference images and stitched together. Registration is performed as in the first example above.
It can be understood that the plurality of reference sub-regions correspond one-to-one to the plurality of preset sub-regions. For the second approach of this example, when different target reference images each contain a reference sub-region corresponding to the same preset sub-region, only one of those images is cropped to obtain that reference sub-region; the same reference sub-region need not be obtained repeatedly.
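The crop-and-stitch selection of the second approach — one reference sub-region per preset sub-region, never obtained twice — can be sketched as follows; `collect_reference_sub_regions` and the "first frame wins" tie-break are illustrative assumptions:

```python
def collect_reference_sub_regions(target_frames, preset_keys):
    """For each preset sub-region, take the matching reference sub-region from
    the first target reference frame that provides one; later duplicates are
    skipped, mirroring 'the same reference sub-region need not be obtained
    repeatedly'.

    target_frames: list of dicts mapping sub-region key -> cropped region.
    Returns dict: preset key -> (frame index, region), or None if missing.
    """
    chosen = {}
    for key in preset_keys:
        chosen[key] = None
        for i, frame in enumerate(target_frames):
            if key in frame:
                chosen[key] = (i, frame[key])
                break  # first frame providing this sub-region wins
    return chosen
```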
According to the image processing method of the embodiments of the present disclosure, moiré is removed from the final synthesized image by physically capturing multiple frames of reference images and performing matching and calibration among them, which ensures that the image is not distorted. Compared with removing moiré through post-processing software, this approach is more efficient and improves the user experience.
In an exemplary embodiment, the present disclosure further provides an image processing apparatus. As shown in fig. 9, the apparatus of this embodiment includes: an acquiring module 110, a determining module 120, a collecting module 130, and a processing module 140. The apparatus of this embodiment is used to implement the method shown in fig. 1. The acquiring module 110 is configured to acquire an initial image of a subject. The determining module 120 is configured to determine a preset sub-region in the initial image, where the initial image includes a plurality of sub-regions obtained by dividing the initial image, and the preset sub-region is a sub-region, among the plurality of sub-regions, in which moiré exists. The collecting module 130 is configured to collect multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image. The processing module 140 is configured to process the preset sub-region in the initial image according to the reference sub-region in the reference image to obtain a target image, where the reference sub-region is a sub-region in the reference image that corresponds to the preset sub-region and satisfies the preset condition.
In an exemplary embodiment, still referring to FIG. 9, the apparatus of this embodiment is used to implement the method shown in FIG. 2. The acquiring module 110 is further configured to: acquire motion data collected by a preset sensor; in response to the motion data being within a threshold range, determine that the electronic device is in a stationary state; and acquire the initial image in the stationary state.
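The stationary-state check can be sketched as follows; the sensor type (e.g., gyroscope angular velocity) and the threshold value are illustrative assumptions, and `is_stationary` is a hypothetical name:

```python
def is_stationary(motion_samples, threshold=0.05):
    """Treat the device as stationary when every recent motion sample
    (e.g. gyroscope angular-velocity magnitude) stays within the threshold
    range; any larger excursion means the device is moving."""
    return all(abs(s) <= threshold for s in motion_samples)
```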
In an exemplary embodiment, still referring to FIG. 9, the apparatus of this embodiment is used to implement the method shown in FIG. 3. The collecting module 130 is configured to: control an image sensor of the electronic device to rotate through a target angle range about the central axis of the image sensor; and, during the rotation of the image sensor, collect multiple frames of reference images of the subject at different angles in the same shooting direction as the initial image. In this embodiment, the collecting module 130 is further configured to collect one frame of reference image at every set angle interval, or to collect one frame of reference image at set time intervals. Alternatively, it determines the target angle range according to the distribution of the preset sub-regions in the initial image, and collects one frame of reference image at every set angle interval within the target angle range.
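Collecting one frame at every set angle interval within the target angle range amounts to sampling a list of capture angles; the step size and degree units are illustrative, and `capture_angles` is a hypothetical helper:

```python
def capture_angles(target_range, step):
    """Angles (degrees) at which to grab one reference frame while the
    sensor sweeps the target angle range, one frame every `step` degrees."""
    start, end = target_range
    angles = []
    a = start
    while a <= end:
        angles.append(round(a, 6))  # trim float accumulation noise
        a += step
    return angles
```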
In an exemplary embodiment, still referring to FIG. 9, the apparatus of this embodiment is used to implement the method shown in FIG. 4. The processing module 140 is further configured to: determine, among the multiple frames of reference images, at least one frame of reference image in which a reference sub-region exists; for each reference image in which a reference sub-region exists, determine the similarity between its reference sub-region and the preset sub-region of the initial image; determine a reference image whose similarity is within the reference range as a target reference image; and process the preset sub-region of the initial image with the reference sub-region of at least one frame of the target reference image to obtain the target image.
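Selecting target reference images by similarity can be sketched as a filter over per-frame similarity scores; the reference-range bounds are illustrative assumptions, not values from the patent:

```python
def select_target_references(similarities, ref_range=(0.9, 1.0)):
    """Keep only the reference frames whose similarity to the initial image
    falls inside the reference range; those become target reference images,
    the rest are removed.

    similarities: dict mapping frame id -> similarity score.
    """
    lo, hi = ref_range
    return {fid: s for fid, s in similarities.items() if lo <= s <= hi}
```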
In an exemplary embodiment, still referring to FIG. 9, the apparatus of this embodiment is used to implement the method shown in FIG. 5. The processing module 140 is configured to: acquire the registered reference sub-region from at least one frame of the target reference image according to the image angle of the preset sub-region; and control the registered reference sub-region to replace the corresponding preset sub-region in the initial image, synthesizing the target image.
Fig. 10 is a block diagram of an electronic device. The present disclosure also provides an electronic device. For example, the device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 506 provides power to the various components of device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor component 514 may detect the open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500; it may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor component 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 500 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described methods.
A non-transitory computer readable storage medium, such as the memory 504 including instructions executable by the processor 520 of the device 500 to perform the method, is provided in another exemplary embodiment of the present disclosure. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The instructions in the storage medium, when executed by a processor of the electronic device, enable the electronic device to perform the above-described method.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (16)

1. An image processing method, characterized in that the method comprises:
acquiring an initial image of a subject;
determining a preset sub-region in the initial image, wherein the initial image comprises a plurality of sub-regions obtained by dividing the initial image, and the preset sub-region is a sub-region with Moire patterns in the plurality of sub-regions;
acquiring reference images of the shot object under different angles of multiple frames in the same shooting direction of the initial image;
processing the preset subarea in the initial image according to the reference subarea in the reference image to obtain a target image; the reference sub-region is a sub-region which corresponds to the preset sub-region in the reference image and meets a preset condition.
2. The image processing method according to claim 1, wherein the acquiring an initial image of the subject comprises:
acquiring motion data acquired by a preset sensor;
in response to the motion data being within a threshold range, determining that the electronic device is in a stationary state;
and acquiring the initial image in the static state.
3. The image processing method according to claim 1, wherein the acquiring multiple frames of reference images of the subject under different angles in the same shooting direction of the initial image comprises:
controlling an image sensor of an electronic device to rotate through a target angle range about a central axis of the image sensor;
and in the rotation process of the image sensor, acquiring the reference images of the shot object under different angles of a plurality of frames in the same shooting direction of the initial image.
4. The image processing method according to claim 3, wherein the acquiring multiple frames of reference images of the subject at different angles comprises:
collecting one frame of reference image at every set angle interval; or collecting one frame of reference image at set time intervals.
5. The image processing method according to claim 3, wherein the acquiring multiple frames of reference images of the subject at different angles comprises:
determining the target angle range according to the distribution area of the preset sub-area in the initial image;
and acquiring a frame of reference image within the target angle range at set angles at intervals.
6. The image processing method according to claim 1, wherein the processing the preset sub-region in the initial image according to a reference sub-region in the reference image to obtain a target image comprises:
determining, among the multiple frames of reference images, at least one frame of reference image in which the reference sub-region exists;
respectively determining the similarity between the reference sub-region of each frame of the reference image and a preset sub-region of the initial image in the reference image with the reference sub-region;
and determining a reference image with the similarity in a reference range as a target reference image, and processing a preset sub-region of the initial image by using a reference sub-region of at least one frame of the target reference image to obtain the target image.
7. The image processing method according to claim 6, wherein the processing the predetermined sub-region of the initial image with the reference sub-region of the at least one frame of target reference image to obtain the target image comprises:
acquiring the reference sub-region after registration in at least one frame of the target reference image according to the image angle of the preset sub-region;
and controlling the registered reference sub-region to replace the corresponding preset sub-region in the initial image, and synthesizing a target image.
8. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an initial image of a shot object;
the determining module is used for determining a preset sub-region in the initial image, wherein the initial image comprises a plurality of sub-regions obtained by dividing the initial image, and the preset sub-region is a sub-region with Moire patterns in the plurality of sub-regions;
the acquisition module is used for acquiring reference images of the shot object under different angles of multiple frames in the same shooting direction of the initial image;
the processing module is used for processing the preset subarea in the initial image according to the reference subarea in the reference image to obtain a target image; the reference sub-region is a sub-region which corresponds to the preset sub-region in the reference image and meets a preset condition.
9. The image processing apparatus of claim 8, wherein the obtaining module is further configured to:
acquiring motion data acquired by a preset sensor;
in response to the motion data being within a threshold range, determining that the electronic device is in a stationary state;
and acquiring the initial image in the static state.
10. The image processing apparatus of claim 8, wherein the acquisition module is configured to:
controlling an image sensor of an electronic device to rotate through a target angle range about a central axis of the image sensor;
and in the rotation process of the image sensor, acquiring the reference images of the shot object under different angles of a plurality of frames in the same shooting direction of the initial image.
11. The image processing apparatus of claim 10, wherein the acquisition module is further configured to:
collecting one frame of reference image at every set angle interval; or collecting one frame of reference image at set time intervals.
12. The image processing apparatus of claim 10, wherein the acquisition module is further configured to:
determining the target angle range according to the distribution area of the preset sub-area in the initial image;
and acquiring a frame of reference image within the target angle range at set angles at intervals.
13. The image processing apparatus of claim 8, wherein the processing module is further configured to:
determining, among the multiple frames of reference images, at least one frame of reference image in which the reference sub-region exists;
respectively determining the similarity between the reference sub-region of each frame of the reference image and a preset sub-region of the initial image in the reference image with the reference sub-region;
and determining a reference image with the similarity in a reference range as a target reference image, and processing a preset sub-region of the initial image by using a reference sub-region of at least one frame of the target reference image to obtain the target image.
14. The image processing apparatus of claim 13, wherein the processing module is further configured to:
acquiring the reference sub-region after registration in at least one frame of the target reference image according to the image angle of the preset sub-region;
and controlling the registered reference sub-region to replace the corresponding preset sub-region in the initial image, and synthesizing a target image.
15. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image processing method of any one of claims 1 to 7.
16. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any of claims 1 to 7.
CN202110605245.7A 2021-05-31 2021-05-31 Image processing method, image processing device, electronic equipment and storage medium Pending CN115423692A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110605245.7A CN115423692A (en) 2021-05-31 2021-05-31 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110605245.7A CN115423692A (en) 2021-05-31 2021-05-31 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115423692A true CN115423692A (en) 2022-12-02

Family

ID=84230412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110605245.7A Pending CN115423692A (en) 2021-05-31 2021-05-31 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115423692A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291857A (en) * 2023-11-27 2023-12-26 武汉精立电子技术有限公司 Image processing method, moire eliminating equipment and moire eliminating device
CN117291857B (en) * 2023-11-27 2024-03-22 武汉精立电子技术有限公司 Image processing method, moire eliminating equipment and moire eliminating device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination