CN117132479A - Moire pattern eliminating method, electronic device and readable storage medium - Google Patents

Moire pattern eliminating method, electronic device and readable storage medium

Info

Publication number
CN117132479A
CN117132479A CN202310488968.2A
Authority
CN
China
Prior art keywords
image
size
target
moire
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310488968.2A
Other languages
Chinese (zh)
Inventor
卢佳欣
孙斌
宓振鹏
刘石磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310488968.2A priority Critical patent/CN117132479A/en
Publication of CN117132479A publication Critical patent/CN117132479A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611Correction of chromatic aberration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a moire elimination method, an electronic device, and a readable storage medium, belonging to the field of terminal technologies. The method is applied to the electronic device and includes the following steps: in the case that the size of a first image is larger than a first target size, determining at least one sliced image of the first image, wherein the first image is an image carrying moire, the target elimination model is a pre-trained neural network model for eliminating moire, and the network structure of the target elimination model is an encoding-decoding structure; eliminating the moire carried by each sliced image in the at least one sliced image through the target elimination model to obtain at least one sliced image with the moire eliminated; and determining a moire-free target image based on the at least one sliced image with the moire eliminated. While ensuring that moire can be eliminated from images of various sizes, the application reduces the memory occupied by the target elimination model, thereby improving the operation efficiency of the electronic device.

Description

Moire pattern eliminating method, electronic device and readable storage medium
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method for eliminating moire, an electronic device, and a readable storage medium.
Background
With the development of terminal technology, electronic devices have permeated every aspect of users' lives. When using an electronic device, a user may need to use its camera to photograph content displayed on the display screen of another device. If the sampling frequency of the camera is lower than the stripe change frequency of the photographed display screen, moire will appear in the image captured by the camera. Since moire degrades the display quality of an image, it usually needs to be eliminated.
Currently, when the moire in an image needs to be eliminated, the image carrying the moire may be input into a pre-trained neural network model for eliminating moire; for example, the model may be an Open Neural Network Exchange (ONNX) model, through which the moire in the image can be eliminated.
However, to ensure that moire can be eliminated from images of various sizes, the neural network model for eliminating moire is usually one with a pyramid structure, and such a model occupies a large amount of memory, which reduces the operation efficiency of the electronic device.
Disclosure of Invention
The application provides a moire elimination method, an electronic device, and a readable storage medium, which can solve the problem in the related art that a neural network model occupies a large amount of memory and thus lowers the operation efficiency of the electronic device. The technical scheme is as follows:
in a first aspect, a method for eliminating moire is provided, and the method is applied to an electronic device, and includes:
determining at least one sliced image of a first image in the case that the size of the first image is larger than a first target size, wherein the first image is an image carrying moire, the size of each sliced image in the at least one sliced image is the first target size, the first target size is the size of the image required to be input into a target elimination model, the target elimination model is a pre-trained neural network model for eliminating moire, and the network structure of the target elimination model is an encoding-decoding structure;
eliminating the moire carried by each sliced image in the at least one sliced image through the target elimination model to obtain at least one sliced image with the moire eliminated;
and determining a target image for eliminating moire based on the at least one sliced image for eliminating moire.
In this way, when the size of the first image is large and does not meet the size requirement of the pre-trained target elimination model, at least one sliced image of the first image can be determined; since the size of each sliced image meets that requirement, the moire in the sliced images can be eliminated by the target elimination model. Because a large first image can be handled by slicing it and eliminating the moire slice by slice, moire can be eliminated from images of various sizes. Moreover, because the network structure of the target elimination model is an encoding-decoding structure, the network is relatively simple and occupies little memory, which improves the operation efficiency of the electronic device.
As one example of the present application, the determining at least one tile image of the first image in the case that the size of the first image is larger than the first target size includes:
determining the number of slices for slicing the first image according to the size of the first image, the first target size and a preset overlapping size when the size of the first image is larger than the first target size;
Under the condition that the number of slices is larger than 1, cutting the first image according to the first target size, the preset overlapping size and the number of slices to obtain a plurality of slice images;
and under the condition that the number of slices is 1, resampling the first image to obtain a first reference image, wherein the size of the first reference image is the first target size, and the first reference image is a slice image of the first image.
Therefore, the processing mode of the first image can be accurately selected by determining the number of the slices of the first image, and the accuracy of processing the first image is improved.
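The slice-count determination described above can be sketched as follows. This is a minimal illustration under assumptions: the patent does not give the formula, and the function name `tile_count` and its parameters are hypothetical.

```python
import math

def tile_count(image_dim: int, tile_dim: int, overlap: int) -> int:
    """Number of tiles needed along one axis so that tiles of size
    tile_dim, each overlapping its neighbour by `overlap` pixels,
    cover the whole axis of length image_dim."""
    if image_dim <= tile_dim:
        return 1                   # a single (possibly resampled) tile suffices
    stride = tile_dim - overlap    # each additional tile advances by this much
    return math.ceil((image_dim - overlap) / stride)
```

With a 1024-pixel tile and a 64-pixel overlap, anything up to 1024 pixels needs exactly one tile, which matches the "resample instead of cut" branch above, while a 1984-pixel axis needs two tiles.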
As an example of the present application, in the case where the number of slices is greater than 1, performing a cutting operation on the first image according to the first target size, the preset overlap size, and the number of slices, to obtain a plurality of slice images, including:
determining a second target size according to the first target size, the preset overlapping size, and the number of slices in the case that the number of slices is larger than 1, wherein the second target size is larger than the first target size and is a size that the slices can cover exactly, so that no further resampling is needed after cutting;
in the case that the size of the first image is not the second target size, resampling the first image to obtain a second reference image, wherein the size of the second reference image is the second target size;
and cutting the second reference image according to the first target size, the preset overlapping size and the slicing number to obtain a plurality of slicing images.
Therefore, under the condition that the size of the first image is not the second target size, the first image is resampled, so that the first image can be completely cut, and the image information of the first image is prevented from being lost to the greatest extent.
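One way to realize the "second target size" (a size that a whole number of overlapping tiles covers exactly, so cutting loses no pixels) is to snap the image dimension to the nearest exact-coverage length before cutting. This is a sketch under assumptions — the patent does not specify the rounding rule or any function names:

```python
def second_target_size(image_dim: int, tile_dim: int, overlap: int) -> int:
    """Exact-coverage lengths are tile_dim + (k - 1) * (tile_dim - overlap)
    for k = 1, 2, ...; return the one nearest to image_dim, so that
    resampling the image to it lets k tiles cut it completely."""
    if image_dim <= tile_dim:
        return tile_dim
    stride = tile_dim - overlap
    k = max(1, round((image_dim - tile_dim) / stride) + 1)   # nearest tile count
    return tile_dim + (k - 1) * stride
```

For example, with a 1024-pixel tile and a 64-pixel overlap, a 2000-pixel-wide image would be resampled to 1984 pixels and then cut into two tiles sharing a 64-pixel overlap.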
As one example of the present application, the determining at least one tile image of the first image in the case that the size of the first image is larger than the first target size includes:
under the condition that the size of the first image is larger than the first target size, cutting the first image according to the first target size and a preset overlapping size to obtain at least one third reference image, wherein the size of the at least one third reference image is not larger than the first target size;
in the case that a third reference image with a size smaller than the first target size exists in the at least one third reference image, resampling that third reference image so that its size becomes the first target size;
and determining the non-resampled third reference image and the resampled third reference image as the slice images of the first image, and obtaining the at least one slice image.
Therefore, the first image is cut directly according to the first target size and the preset overlapping size, so that the calculation complexity is reduced, and the cutting efficiency is improved. In addition, the third reference image with the size smaller than the first target size is resampled, so that the first image can be ensured to be cut and complete image information can be kept, loss of the image information is avoided, and the accuracy of image cutting is improved.
As an example of the present application, the determining a target image for eliminating moire based on the at least one slice image for eliminating moire includes:
determining the sliced image with the moire eliminated as the target image in the case that the number of the at least one sliced image with the moire eliminated is 1;
and in the case that the number of the at least one sliced image with the moire eliminated is plural, fusing the plurality of sliced images with the moire eliminated to obtain the target image.
In this way, the target image is determined in different ways under the condition that the number of at least one segmented image for eliminating moire is different, so that the reliability of determining the target image is improved.
As an example of the present application, in the case that the number of the at least one sliced image with the moire eliminated is plural, fusing the plurality of sliced images with the moire eliminated to obtain the target image includes:
determining the cutting position of each of the plurality of sliced images in the case that the number of the at least one sliced image with the moire eliminated is plural;
determining a fusion matrix corresponding to each segmented image according to the corresponding cutting position of each segmented image, wherein the fusion matrix is used for enabling the overlapped part in the corresponding segmented image to generate color gradient;
multiplying each segmented image with a corresponding fusion matrix to obtain a plurality of segmented images with gradually changed colors;
and fusing the plurality of color-graded segmented images according to the cutting position of each segmented image to obtain the target image.
Therefore, at least one segmented image with moire eliminated is fused through the fusion matrix, so that obvious boundary sense in the fused image is avoided, and the image display quality is improved.
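The seam-free fusion can be sketched with linear-ramp weights: each tile is multiplied by a weight matrix that fades toward the shared edge, so the stitched result has no visible boundary. The ramp shape and the name `fuse_horizontal` are assumptions; the patent only requires that the fusion matrices produce a color gradient in the overlapped parts.

```python
import numpy as np

def fuse_horizontal(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Stitch two tiles that share `overlap` columns, cross-fading the
    overlap with complementary linear weights (the 'fusion matrices')."""
    h, wl = left.shape[:2]
    out_w = wl + right.shape[1] - overlap
    out = np.zeros((h, out_w) + left.shape[2:], dtype=float)
    ramp = np.linspace(1.0, 0.0, overlap)            # weight of the left tile
    ramp = ramp.reshape((1, overlap) + (1,) * (left.ndim - 2))
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    out[:, wl - overlap:wl] = (ramp * left[:, wl - overlap:]
                               + (1.0 - ramp) * right[:, :overlap])
    return out

# two flat tiles, values 1 and 3, sharing a 2-column overlap
stitched = fuse_horizontal(np.full((2, 4), 1.0), np.full((2, 4), 3.0), overlap=2)
```

A wider overlap gives a gentler gradient; in the extreme case of a 1-pixel overlap the cross-fade degenerates into a hard seam, which is why the preset overlap size must be large enough to hide tile-to-tile differences.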
As an example of the present application, the method further comprises:
in the case that the size of the first image is smaller than the first target size, resampling the first image to obtain a fourth reference image whose size is the first target size;
and eliminating the moire of the fourth reference image through the target elimination model to obtain the target image.
Therefore, in the case that the size of the first image is smaller than the first target size, the first image is resampled so that the obtained fourth reference image meets the size requirement of the target elimination model. This ensures that moire can be eliminated from images of various sizes and improves the reliability of moire elimination.
As an example of the present application, the method further comprises:
and in the case that the size of the first image is not the first target size, performing filtering processing on the first image before determining at least one slice image of the first image.
Therefore, performing low-pass filtering on the first image prevents the frequency aliasing in the first image from being aggravated when the first image is subsequently resampled, so that a better moire removal effect can be achieved after the moire carried by the first image is eliminated through the target elimination model.
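The anti-aliasing role of the low-pass filter can be sketched as a separable Gaussian blur applied before decimation. The filter type, `sigma`, and function names are illustrative assumptions; the patent does not fix the filter.

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def lowpass_then_decimate(img: np.ndarray, factor: int, sigma: float = 1.0) -> np.ndarray:
    """Blur with a separable Gaussian, then keep every `factor`-th pixel.
    The blur removes frequencies above the new Nyquist limit, which is
    what keeps the resampling step from aggravating the aliasing."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    pad = len(k) // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    # separable convolution: rows first, then columns ('valid' restores size)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, rows)
    return blurred[::factor, ::factor]

small = lowpass_then_decimate(np.ones((8, 8)), factor=2)
```

In practice the cutoff should be matched to the downsampling factor (larger `sigma` for stronger decimation), otherwise residual high-frequency stripes fold back into new moire.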
As an example of the present application, before the moire carried by each sliced image in the at least one sliced image is eliminated through the target elimination model to obtain at least one sliced image with the moire eliminated, the method further includes:
acquiring training data, wherein the training data includes a plurality of negative sample images and a plurality of positive sample images, each negative sample image carries moire, each positive sample image carries no moire, and the plurality of negative sample images correspond one-to-one to the plurality of positive sample images;
and carrying out iterative training on an initial elimination model based on the training data to obtain the target elimination model, wherein the network structure of the initial elimination model is the coding-decoding structure.
Therefore, the target elimination model with the network structure being the encoding-decoding structure is simple in structure, occupies small memory, can be deployed in the electronic equipment, and ensures the operation efficiency of the electronic equipment.
As an example of the present application, the acquiring training data includes:
acquiring the plurality of negative sample images, wherein each negative sample image in the plurality of negative sample images is obtained by shooting the display content of a corresponding display screen;
for each negative sample image in the plurality of negative sample images, acquiring a screen capturing image of display content of a display screen corresponding to each negative sample image;
performing topology transformation on the screen capturing image corresponding to each negative sample image to obtain a topology transformation screen capturing image corresponding to each negative sample image, wherein the data characteristics of the topology transformation screen capturing image are the same as the data characteristics of the corresponding negative sample image;
and migrating the color characteristics of each negative sample image into the corresponding topological transformation screen capturing image to obtain a positive sample image corresponding to each negative sample image.
Therefore, the colors of each negative sample image and its corresponding positive sample image are unified, which reduces the burden of learning image brightness during model training, allows the model to focus on learning moire removal, avoids color changes in the subsequent moire elimination process, and reduces damage to image information.
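The color-feature migration step can be approximated by a Reinhard-style statistics transfer: shift each channel of the topology-transformed screenshot so that its mean and standard deviation match the negative sample. This is a hedged sketch — the patent does not name the migration algorithm, and `migrate_color` is a hypothetical helper.

```python
import numpy as np

def migrate_color(negative: np.ndarray, screenshot: np.ndarray) -> np.ndarray:
    """Match the per-channel mean/std of `screenshot` to `negative`,
    giving a positive sample whose colors agree with the moire image."""
    out = screenshot.astype(float).copy()
    for c in range(out.shape[-1]):
        src = negative[..., c].astype(float)
        ch = out[..., c]
        scale = src.std() / ch.std() if ch.std() > 0 else 1.0
        out[..., c] = (ch - ch.mean()) * scale + src.mean()
    return np.clip(out, 0.0, 255.0)

# flat negative: every channel should collapse to the negative's color
flat_pos = migrate_color(np.full((2, 2, 3), 100.0),
                         np.arange(12, dtype=float).reshape(2, 2, 3))

# structured negative: per-channel mean 110, std 10 should be reproduced
neg = np.zeros((2, 2, 3)); neg[0] = 100.0; neg[1] = 120.0
pos = migrate_color(neg, np.arange(12, dtype=float).reshape(2, 2, 3))
```

Transferring only first- and second-order statistics preserves the screenshot's structure (the moire-free content) while borrowing the negative sample's global tone, which is exactly the pairing property the training data needs.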
As one example of the present application, the determining at least one tile image of the first image in the case that the size of the first image is larger than the first target size includes:
displaying the first image;
receiving a first user operation on the first image;
in response to the first user operation, at least one tile image of the first image is determined in the event that the size of the first image is greater than the first target size.
In this way, by responding to the first user operation triggered by the user to perform the operation of eliminating the moire in the first image, the interactivity with the user is improved.
In a second aspect, an electronic device is provided, where the electronic device includes a processor and a memory, where the memory is configured to store a program for supporting the electronic device to execute the method for eliminating moire provided in the first aspect, and store data related to implementing the method for eliminating moire described in the first aspect. The processor is configured to execute a program stored in the memory. The electronic device may further comprise a communication bus for establishing a connection between the processor and the memory.
In a third aspect, there is provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method of moire elimination described in the first aspect above.
In a fourth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of moire elimination as described in the first aspect above.
The technical effects obtained by the second, third and fourth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
FIG. 1 is a schematic diagram of a display screen displaying content through discrete displayed pixels according to an embodiment of the present application;
fig. 2 is a schematic diagram of a camera capturing images in a discrete capturing manner according to an embodiment of the present application;
FIG. 3 is a schematic view of an image with moire patterns according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a neural network model with a pyramid structure according to an embodiment of the present application;
fig. 5 is a schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of another application scenario provided in an embodiment of the present application;
FIG. 7 is a block diagram of a software system of an electronic device provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of a method for eliminating Moire patterns according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a preset overlap size according to an embodiment of the present application;
FIG. 10 is a schematic diagram showing different fusion matrices by different binary images according to an embodiment of the present application;
FIG. 11 is a schematic diagram showing a comparison of fusing tile images in different ways according to an embodiment of the present application;
FIG. 12 is a schematic diagram showing the effect of low-pass filtering on moire according to an embodiment of the present application;
FIG. 13 is a schematic flow chart of another method for eliminating Moire patterns according to the present application;
FIG. 14 is a flow chart of a model training method according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a negative sample image of different depth information provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a process flow of a screen capturing image corresponding to each negative image according to an embodiment of the present application;
FIG. 17 is a graph showing the effect of influence on image information in the moire removal process according to the embodiment of the present application;
FIG. 18 is a schematic flow chart of another method for eliminating Moire patterns according to the present application;
FIG. 19 is a schematic flow chart of another method for eliminating Moire patterns according to the present application;
FIG. 20 is a flow chart of another model training method according to an embodiment of the present application;
fig. 21 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that references to "a plurality" in this disclosure refer to two or more. In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, to facilitate a clear description of the technical solution of the present application, the words "first", "second", and the like are used to distinguish between identical or similar items having substantially the same function. It will be appreciated by those skilled in the art that the words "first", "second", and the like do not limit the quantity or the order of execution, nor do they indicate that the items necessarily differ.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In one scenario, when using an electronic device such as a mobile phone, a user may need to use the camera of the device to photograph content displayed on another display screen. A display screen presents content through discretely displayed pixels (illustrated in fig. 1), and a camera captures images through discrete sampling (see fig. 2 (a) or fig. 2 (b)), so the captured image is itself a discrete representation of the scene. If the sampling frequency of the camera is lower than the stripe change frequency of the photographed display screen, moire (for example, as shown in fig. 3) will appear in the image collected through the camera. Since moire degrades the display quality of the image, it usually needs to be eliminated.
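The aliasing mechanism described above can be demonstrated with a short numerical sketch (the frequencies are illustrative, not taken from the patent): sampling a stripe pattern below its change frequency produces a low-frequency alias, which is what the sensor actually records as moire.

```python
import numpy as np

# A 1-D illustration of the sampling effect: a 9 Hz stripe pattern
# sampled at only 10 Hz is indistinguishable, sample for sample, from
# a slow 1 Hz pattern -- the aliased "beat" that appears as moire when
# a camera's sampling frequency is below the screen's stripe frequency.
fs, f_stripe = 10.0, 9.0               # sampling rate and stripe frequency (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)      # two seconds of sample instants
samples = np.cos(2 * np.pi * f_stripe * t)
alias = np.cos(2 * np.pi * (fs - f_stripe) * t)   # the predicted 1 Hz alias
```

The two signals coincide at every sample instant, so no post-hoc processing of the samples alone can tell the true stripe frequency from its alias; this is why the document later pairs the model with low-pass filtering rather than trying to invert the aliasing.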
Currently, when the moire in an image needs to be eliminated, the image carrying the moire may be input into a pre-trained neural network model for eliminating moire; for example, the model may be an ONNX model, through which the moire in the image can be eliminated.
However, to ensure that moire can be eliminated from images of various sizes, the neural network model for eliminating moire is generally one with a pyramid structure (exemplarily, as shown in fig. 4), and such a model occupies a large amount of memory, which reduces the operation efficiency of the electronic device. If the pyramid-structured model is instead deployed in the cloud, the efficiency problem is alleviated, but the model then requires a network connection to use, which is inconvenient and may leak user privacy.
To ensure that moire can be eliminated from images of various sizes while reducing the memory occupied by the neural network model, and thereby improving the operation efficiency of the electronic device, an embodiment of the present application provides a moire elimination method. In this method, when the size of a first image is large and does not meet the size requirement of a pre-trained target elimination model, at least one sliced image of the first image can be determined, each sliced image meeting that size requirement; the network structure of the target elimination model is an encoding-decoding structure, and the moire in the sliced images can be eliminated through the model. Because a large first image can be handled by slicing it and eliminating the moire slice by slice, moire can be eliminated from images of various sizes. Because the encoding-decoding network structure is relatively simple, it occupies little memory, which improves the operation efficiency of the electronic device. Furthermore, since the target elimination model is compact, it does not need to be deployed in the cloud, which avoids leaking user privacy.
In order to facilitate understanding, before describing the method for eliminating moire provided by the embodiment of the present application in detail, an application scenario related to the embodiment of the present application is described next by taking an electronic device as an example of a mobile phone.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating an application scenario according to an exemplary embodiment. In one possible scenario, while using a mobile phone, the user may need to photograph content displayed on the display screen of another electronic device, for example, text displayed on the screen. In this case, referring to fig. 5 (a), the user may capture the displayed content through the camera of the mobile phone; if the sampling frequency of the camera is lower than the stripe change frequency of the display screen, then, referring to fig. 5 (b), the captured image shown on the mobile phone will carry moire (in the drawings of the embodiments of the present application, moire is represented by a stripe pattern). To eliminate the moire in the captured image, the user may click a "moire elimination" button displayed on the current interface. In response to the click operation, when the size of the captured image is larger than a first target size, the mobile phone resamples and/or cuts the captured image to determine at least one sliced image of it; the moire in each sliced image is then eliminated through a pre-trained target elimination model, yielding at least one sliced image with the moire eliminated; finally, based on these sliced images, the captured image with the moire eliminated is displayed as shown in fig. 5 (c).
It should be noted that the first target size is a size of an image required to be input by the target elimination model, and the first target size may be preset according to requirements, where the first target size may be 1024 (number of pixels per row) ×1024 (number of pixels per column), 2048×1024 or 2048×2048, and the embodiment of the present application is illustrated by taking the first target size as 1024×1024.
In another application scenario, after the mobile phone obtains the photographed image with the moire eliminated, it may store both the original photographed image carrying moire and the photographed image with the moire eliminated.
In another application scenario, after collecting a photographed image, the mobile phone may also automatically identify whether the image carries moire; if it does, the mobile phone can automatically eliminate the moire through the target elimination model in the manner described in the scenario of fig. 5, without manual triggering by the user, to obtain the photographed image with the moire eliminated.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating another application scenario according to an exemplary embodiment. In one possible scenario, after obtaining a plurality of images carrying moire on a mobile phone, the user can eliminate the moire in these images in batch. For example, referring to fig. 6 (a), when the mobile phone displays a picture selection interface, the user may select the images whose moire needs to be eliminated; in response to the selection operation, referring to fig. 6 (b), the selection mark corresponding to each selected image is highlighted (in the drawings of the embodiments of the present application, the selection mark is a circle, and a filled black circle represents highlighting). After finishing the selection, the user clicks the "moire elimination" button; in response to the click operation, the mobile phone eliminates the moire in the selected images in turn through the target elimination model. For each selected image, when its size is larger than the first target size, the image is resampled and/or cut to determine at least one sliced image; the moire in each sliced image is then eliminated through the target elimination model, yielding at least one sliced image with the moire eliminated; finally, based on these sliced images, the plurality of images with the moire eliminated are displayed as shown in fig. 6 (c).
In the embodiments of the present application, the application scenarios shown in fig. 5 and fig. 6 are merely examples and do not limit the embodiments of the present application.
The software system of the electronic device 100 will be described next.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, an Android (Android) system with a layered architecture is taken as an example, and a software system of the electronic device 100 is illustrated.
Fig. 7 is a block diagram of a software system of the electronic device 100 according to an embodiment of the present application. Referring to fig. 7, the layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom: an application layer, an application framework layer, an Android runtime (Android runtime) and system library layer, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 7, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the applications of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 7, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like. The window manager is used for managing window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like. The content provider is used to store and retrieve data and make the data accessible to applications; the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to construct the display interface of an application, and a display interface may be composed of one or more views, for example a view displaying a text notification icon, a view displaying text, and a view displaying a picture. The phone manager is used to provide the communication functions of the electronic device 100, such as management of call status (including connected, hung up, and the like). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages that automatically disappear after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to give a message alert, and the like.
The notification manager may also present notifications that appear in the system top status bar in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or notifications that appear on the screen in the form of a dialog window. For example, a text message may be prompted in the status bar, a notification sound may be emitted, the electronic device may vibrate, or an indicator light may flash.
The Android runtime includes a core library and virtual machines. The Android runtime is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, such as: a surface manager (surface manager), media libraries (Media Libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL). The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing libraries are used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least includes a display driver, a camera driver, an audio driver, and a sensor driver.
The workflow of the electronic device 100 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the time stamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the event. Taking as an example that the touch operation is a click operation and the corresponding control is the camera application icon: the camera application calls an interface of the application framework layer to start the camera application, which in turn calls the kernel layer to start the camera driver and captures a still image or video through the camera 193.
Based on the execution body and the application scenarios provided in the above embodiments, the moire elimination method provided in the embodiments of the present application is described next. Referring to fig. 8, fig. 8 is a flow chart of a moire elimination method according to an exemplary embodiment. By way of example and not limitation, the electronic device may include not only the modules shown in fig. 7 but also other modules, for example a camera, a moire elimination module, and a multimedia database. Fig. 8 is applied to the electronic device and is illustrated taking as an example an interaction among the camera, the gallery, the moire elimination module, and the multimedia database included in the electronic device. The method may include some or all of the following steps:
Step 801: the camera receives a start operation triggered by a user.
When the user needs to capture an image, the user may trigger the start operation of the camera, so that the camera receives the start operation triggered by the user.
For example, the operation of opening the camera application by the user may be a start operation received by the camera. Or, in the process of using the social application program, the user may need to use the camera of the electronic device to acquire the content displayed in the display screen of other devices, and in this case, the user may trigger the photographing function in the display interface of the social application program, where the triggering operation of the photographing function by the user is the starting operation that can be received by the camera.
Step 802: in response to the start operation, the camera is started.
Step 803: the camera receives the acquisition operation.
After the camera is started, a user can trigger the acquisition operation, so that the camera can receive the acquisition operation.
Step 804: the camera responds to the acquisition operation to acquire a first image and sends the first image to the gallery.
Step 805: the gallery receives the first image sent by the camera and displays the first image.
Step 806: the gallery receives a user-triggered moire-elimination operation.
Because the camera is shooting content displayed on another display screen, the collected first image is likely to carry moire. If the first image carries moire, the user can trigger the moire elimination operation, so that the gallery receives the moire elimination operation.
Step 807: the gallery sends the first image to the moire abatement module in response to the moire abatement operation.
The moire elimination module includes a target elimination model, which is a neural network model trained in advance for eliminating moire; therefore, in order to eliminate the moire in the first image, the gallery may send the first image to the moire elimination module.
As one example, the network structure of the target elimination model is an encoding-decoding structure.
It is worth noting that because the target elimination model has a simple structure and occupies little memory, the running efficiency of the electronic device is ensured.
It should be noted that, the training process of the target elimination model may refer to the method shown in fig. 14 below, which is not described in detail in the embodiment of the present application.
Step 808: the moire cancellation module receives a first image.
Step 809: the moire removal module determines a size of the first image.
Because the target elimination model has certain requirements on the size of the input image, if the size of the first image does not meet the size required by the target elimination model, the moire elimination module needs to process the first image first; if the size of the first image meets the required size, the moire carried in the first image can be eliminated directly through the target elimination model. Therefore, in order to determine which operation to perform, the moire elimination module needs to determine the size of the first image.
Step 810: in the case where the size of the first image is greater than the first target size, the moire elimination module determines at least one tile image of the first image.
The size of each of the at least one tile image is the first target size, where the first target size is the size of the image that the target elimination model requires as input; the first target size can be set in advance according to requirements.
As one example, in the case where the size of the first image is greater than the first target size, the moire elimination module can determine the at least one tile image of the first image in different ways; the following two ways are taken as examples for explanation.
In one possible implementation, the operation of the moire elimination module determining the at least one tile image of the first image includes: in the case that the size of the first image is greater than the first target size, determining the number of slices for slicing the first image according to the size of the first image, the first target size, and a preset overlap size; in the case that the number of slices is greater than 1, cutting the first image according to the first target size, the preset overlap size, and the number of slices to obtain a plurality of tile images; and in the case that the number of slices is 1, resampling the first image to obtain a first reference image, where the size of the first reference image is the first target size and the first reference image is the tile image of the first image.
In the case where the size of the first image is greater than the first target size, the size may be only slightly greater, in which case the first image cannot support being cut and instead needs to be resampled; of course, the size of the first image may also be much larger and able to support cutting. Therefore, in order to determine how to process the first image, in the case that the size of the first image is greater than the first target size, the moire elimination module may determine the number of slices for slicing the first image according to the size of the first image, the first target size, and the preset overlap size.
It should be noted that the preset overlap size may be set in advance according to requirements, and the preset overlap size is related to the first target size. That is, in the case where the overlap occurs in the lateral (or horizontal) direction, the preset overlap size is the same as the first target size in the longitudinal (or vertical) direction; in this case the preset overlap size may be 1024 (the same as the longitudinal dimension of the first target size) x 128 (the lateral overlap width), or 1024 x 256, and so on. In the case where the overlap occurs in the longitudinal direction, the preset overlap size is the same as the first target size in the lateral direction; in this case the preset overlap size may be 128 (the longitudinal overlap width) x 1024 (the same as the lateral dimension of the first target size), or 256 x 1024, and so on. To facilitate understanding of how the preset overlap size is set for overlaps in different directions, an embodiment of the present application provides a schematic diagram of the preset overlap size; please refer to fig. 9.
Since the lateral length and the longitudinal length of the first image are not necessarily the same, the number of cuts the moire elimination module makes in the lateral direction and in the longitudinal direction of the first image are not necessarily the same either; that is, the number of lateral cuts and the number of longitudinal cuts of the first image may differ. As an example, the moire elimination module may determine the number of lateral cuts and the number of longitudinal cuts of the first image according to the size of the first image, the first target size, and the preset overlap size, respectively; the number of lateral cuts is multiplied by the number of longitudinal cuts to obtain the number of slices for slicing the first image.
In some embodiments, the moire elimination module determines the number of lateral cuts of the first image according to the lateral length of the first image, the lateral length in the first target size, and the lateral overlap length in the preset overlap size, through the following first formula.
In the above first formula (1):

n = ceil((N_row0 - overlap_0) / (patchsize_0 - overlap_0))    (1)

where n is the number of lateral cuts, N_row0 is the lateral length of the first image (expressed as a number of lateral pixels), overlap_0 is the lateral overlap length in the preset overlap size (expressed as a number of lateral pixels), and patchsize_0 is the lateral length in the first target size (expressed as a number of lateral pixels).
As an example, the moire elimination module may also determine the number of longitudinal cuts of the first image through the above first formula, according to the longitudinal length of the first image, the longitudinal overlap length in the preset overlap size, and the longitudinal length in the first target size. In the case where the number of longitudinal cuts of the first image is determined through the first formula, each parameter in the first formula is the corresponding longitudinal parameter.
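For illustration, the per-axis cut count of formula (1) can be sketched as follows. This is a hedged reading in which the count is the smallest number of tiles (each patchsize_0 long, adjacent tiles sharing overlap_0 pixels) that covers the image, consistent with formula (3) below; the function name and the sample sizes are illustrative only, not taken from the patent.

```python
import math

def num_cuts(length, patch, overlap):
    """Smallest number of tiles of size `patch`, with `overlap` pixels
    shared between neighbours, needed to cover `length` pixels:
    ceil((length - overlap) / (patch - overlap))."""
    if length <= patch:
        return 1
    return math.ceil((length - overlap) / (patch - overlap))

# Example: a 4096-pixel-wide image, 1024-pixel tiles, 128-pixel overlap.
print(num_cuts(4096, 1024, 128))  # → 5 (5*1024 - 4*128 = 4608 >= 4096)
```

Applied once with the lateral parameters and once with the longitudinal parameters, the product of the two counts gives the number of slices described above.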
In some embodiments, in the case where the size of the first image is greater than the first target size and also greater than a size threshold, the moire elimination module may determine the number of lateral cuts of the first image according to the above first formula, or may determine it according to the following second formula; this is not specifically limited in the embodiments of the present application.
In the above second formula (2):

n = ceil((N_row0 / 2 - overlap_0) / (patchsize_0 - overlap_0))    (2)

where n is the number of lateral cuts, N_row0 is the lateral length of the first image, overlap_0 is the lateral overlap length in the preset overlap size, and patchsize_0 is the lateral length in the first target size.
Similarly, the moire elimination module can determine the number of longitudinal cuts of the first image through the second formula; in the case where the number of longitudinal cuts is determined through the second formula, each parameter in the second formula is the corresponding longitudinal parameter.
It should be noted that the size threshold may be preset according to requirements, for example, the size threshold may be 3000 pixels, 4000 pixels, 5000 pixels, or the like.
As an example, in the above second formula (2), the "N_row0 / 2" operation may mean that the moire elimination module downsamples the lateral dimension of the first image to N_row0 / 2. Of course, it may also simply be a numerical calculation.
In some embodiments, in the case where the number of slices is greater than 1, the operation of cutting the first image according to the first target size, the preset overlap size, and the number of slices to obtain the plurality of tile images includes: in the case that the number of slices is greater than 1, determining a second target size according to the first target size, the preset overlap size, and the number of slices, where the second target size is a size that does not need resampling and is greater than the first target size; resampling the first image to obtain a second reference image whose size is the second target size; and cutting the second reference image according to the first target size, the preset overlap size, and the number of slices to obtain the plurality of tile images.
In the case where the number of slices is greater than 1, the first image may not be exactly cuttable into an integer number of tile images according to the first target size and the preset overlap size; in that case the first image needs to be resampled first. Therefore, in order to determine whether the first image needs to be resampled, the moire elimination module may determine the second target size according to the first target size, the preset overlap size, and the number of slices.
As one example, the moire cancellation module may determine the lateral length in the second target size based on the lateral length of the first target size, the lateral overlap length of the preset overlap size, and the number of lateral slices; and determining the longitudinal length in the second target size according to the longitudinal length of the first target size, the longitudinal overlapping length of the preset overlapping size and the number of longitudinal slices.
As one example, the moire eliminating module may determine the lateral length in the second target size according to the lateral length of the first target size, the lateral overlap length of the preset overlap size, and the number of lateral slices by the following third formula.
In the above third formula (3):

N_row1 = n x patchsize_0 - (n - 1) x overlap_0    (3)

where N_row1 is the lateral length in the second target size, n is the number of lateral slices, patchsize_0 is the lateral length in the first target size, and overlap_0 is the lateral overlap width in the preset overlap size.
As an example, the moire elimination module may also determine the longitudinal length in the second target size by the third formula described above, and in the case of determining the longitudinal length in the second target size by the third formula, each parameter in the third formula is a longitudinally related parameter.
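Formula (3) then gives the exact length that n such tiles cover, i.e. the resampling target; a sketch (function name and sample sizes illustrative):

```python
def covered_length(n_slices, patch, overlap):
    """Formula (3): n tiles of size `patch`, with adjacent tiles sharing
    `overlap` pixels, span exactly n*patch - (n-1)*overlap pixels."""
    return n_slices * patch - (n_slices - 1) * overlap

# 5 lateral slices of 1024 pixels with 128-pixel overlaps:
print(covered_length(5, 1024, 128))  # → 4608
```

A 4096-pixel-wide first image would thus be resampled to 4608 pixels wide before cutting, so the cut produces an integer number of tiles.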
In some embodiments, in the case where the size of the first image is not the second target size, the operation of the moire elimination module resampling the first image includes: downsampling the first image if its size is greater than the second target size, and upsampling the first image if its size is smaller than the second target size.
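A minimal sketch of that decision in one dimension, using nearest-neighbour resampling of a single scanline as a stand-in for whatever filter the module actually uses (an assumption — the filter is not specified here):

```python
def resample_row(row, new_len):
    """Nearest-neighbour resample of one scanline to `new_len` pixels;
    works for both down-sampling (new_len < len(row)) and
    up-sampling (new_len > len(row)). A sketch, not the real filter."""
    old_len = len(row)
    return [row[i * old_len // new_len] for i in range(new_len)]

def to_second_target(row, target_len):
    # Larger than the second target size -> down-sample;
    # smaller -> up-sample; already matching -> leave unchanged.
    if len(row) == target_len:
        return row
    return resample_row(row, target_len)

print(len(to_second_target(list(range(4096)), 4608)))  # → 4608
```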
In another possible implementation, the operation of the moire elimination module determining the at least one tile image of the first image in the case that the size of the first image is greater than the first target size includes: cutting the first image according to the first target size and the preset overlap size to obtain at least one third reference image, where the size of each third reference image is not greater than the first target size; in the case that a third reference image with a size smaller than the first target size exists in the at least one third reference image, resampling that third reference image so that its resampled size is the first target size; and determining the non-resampled third reference images and the resampled third reference images as the tile images of the first image to obtain the at least one tile image.
When the first image is cut, it may happen to be exactly divisible into an integer number of tile images according to the first target size and the preset overlap size, or it may not be. In the latter case, a third reference image with a size smaller than the first target size will exist in the at least one third reference image. Therefore, in the case where such a third reference image exists, the moire elimination module may resample the third reference image whose size is smaller than the first target size, in order to avoid losing image information.
It should be noted that resampling the third reference image having a size smaller than the first target size refers to upsampling the third reference image having a size smaller than the first target size.
It is worth noting that by resampling the third reference image whose size is smaller than the first target size, the first image can be cut while still retaining complete image information; loss of image information is avoided and the accuracy of image cutting is improved.
As an example, in case there is no third reference image of a size smaller than the first target size among the at least one third reference image, i.e. in case the sizes of the at least one third reference image are all the first target size, the at least one third reference image is determined as at least one slice image of the first image.
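This second implementation — cut first at a fixed stride, then upsample any undersized trailing piece — can be sketched for one axis as follows; the greedy stride of (patchsize - overlap) is an assumption consistent with the preset overlap size described above:

```python
def cut_positions(length, patch, overlap):
    """Start/end offsets of the third reference images along one axis:
    a fixed stride of (patch - overlap); the final piece may be shorter
    than `patch` and would then be upsampled to `patch` (step 810)."""
    stride = patch - overlap
    pieces, pos = [], 0
    while pos + patch < length:
        pieces.append((pos, pos + patch))
        pos += stride
    pieces.append((pos, length))  # last piece, possibly undersized
    return pieces

print(cut_positions(2500, 1024, 128))
# → [(0, 1024), (896, 1920), (1792, 2500)]; the 708-pixel tail is upsampled
```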
Step 811: the moire elimination module eliminates the moire carried by each of the at least one tile image through the target elimination model to obtain at least one moire-removed tile image.
As one example, the moire elimination module can sequentially input each of the at least one tile image to the target elimination model to eliminate the moire carried by each tile image.
Because the target elimination model is trained in advance, the moire in each tile image can be eliminated accurately through the target elimination model, improving the display quality of each tile image.
Step 812: the moire elimination module determines a target image to eliminate moire based on at least one sliced image to eliminate moire.
As can be seen from the above, the number of the at least one tile image of the first image may be 1 or more; therefore, the number of the at least one moire-removed tile image may also be 1 or more, and the way in which the moire elimination module determines the moire-removed target image differs depending on the number of moire-removed tile images.
In some embodiments, in the case that the number of the at least one moire-removed tile image is 1, the moire-removed tile image is determined as the target image; in the case that the number of the at least one moire-removed tile image is plural, the plurality of moire-removed tile images are fused to obtain the target image.
Since the case where the number of moire-removed tile images is plural indicates that the first image was cut before the moire was eliminated, the moire elimination module needs to fuse the plurality of moire-removed tile images in order to obtain the complete target image.
As an example, in the case where the number of the at least one moire-removed tile image is plural, the operation of the moire elimination module fusing the plurality of moire-removed tile images to obtain the target image includes: determining the cutting position of each of the plurality of tile images; determining, according to the cutting position corresponding to each tile image, the fusion matrix corresponding to that tile image, where the fusion matrix is used to produce a color gradient in the overlapping part of the corresponding tile image; multiplying each tile image by its corresponding fusion matrix to obtain a plurality of color-graded tile images; and fusing the plurality of color-graded tile images according to the cutting position of each tile image to obtain the target image.
Because there is an overlapping part between two adjacent tile images when the first image is cut, in order to make the fused image display more smoothly, the moire elimination module can determine the fusion matrix corresponding to each tile image according to the cutting position corresponding to that tile image.
It should be noted that the fusion matrix is a preset matrix described by a binary image with a color gradient; that is, the fusion matrix is a matrix whose values change gradually between 0 and 1 (or between 1 and 0), and the color-graded part of the binary image indicates the overlapping part of the image. Fig. 10 is a schematic diagram illustrating different fusion matrices represented by different binary images according to an embodiment of the present application. Fig. 10 (a) shows a binary image A and a binary image B: the pixel values in the right part of binary image A change gradually from 1 to 0 going from left to right, and the pixel values in the left part of binary image B change gradually from 0 to 1 going from left to right. Fig. 10 (b) shows a binary image C, in which the pixel values in the lower part change gradually from 1 to 0 going from top to bottom, and a binary image D, in which the pixel values in the upper part change gradually from 0 to 1 going from top to bottom.
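The 0-to-1 and 1-to-0 gradients of fig. 10, and the weighted fusion of an overlap strip between two adjacent tiles, can be sketched in one dimension (a simplification — the real fusion matrices are two-dimensional and applied per pixel):

```python
def fusion_ramp(width, rising):
    """One row of a fusion matrix across `width` overlapped pixels:
    weights graded 0 -> 1 if `rising` (like binary images B/D in fig. 10),
    otherwise 1 -> 0 (like binary images A/C)."""
    if width == 1:
        return [1.0]
    w = [i / (width - 1) for i in range(width)]
    return w if rising else w[::-1]

def fuse_overlap(left_strip, right_strip):
    """Weighted sum over the shared strip of two adjacent tiles: the left
    tile fades out while the right tile fades in, so the fused result
    transitions smoothly across the seam."""
    n = len(left_strip)
    out_w, in_w = fusion_ramp(n, rising=False), fusion_ramp(n, rising=True)
    return [l * a + r * b
            for l, r, a, b in zip(left_strip, right_strip, out_w, in_w)]

print(fuse_overlap([10.0, 10.0, 10.0], [20.0, 20.0, 20.0]))  # → [10.0, 15.0, 20.0]
```

Note that the two ramps sum to 1 at every pixel, so uniform regions keep their brightness after fusion.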
In some embodiments, the operation of the moire cancellation module determining the fusion matrix corresponding to each tile image according to the cutting position corresponding to each tile image includes: determining the overlapping quantity and the overlapping position of the frame of each segmented image and the frame of the first image according to the corresponding cutting position of each segmented image; and determining a fusion matrix corresponding to each segmented image according to the corresponding overlapping quantity and overlapping position of each segmented image.
The overlap between the frame of a tile image and the frame of the first image means that a border of the tile image coincides with a border of the first image.
As one example, in the case where the number of overlaps is 4 and the overlapping positions are the left frame, the right frame, the upper frame, and the lower frame, respectively, it is determined that there is no corresponding fusion matrix.
In the case where the number of overlaps is 4 and the overlapping positions are the left, right, upper, and lower frames, the first image was not cut at all; in this case no fusion of tile images is required, so there is no corresponding fusion matrix. Alternatively, since no cutting was performed, the moire elimination module may simply skip the operation of determining a fusion matrix. This is not specifically limited in the embodiments of the present application.
As an example, in the case where the number of overlaps is 3 and the overlapping positions are the left frame, the upper frame, and the lower frame, respectively, it is determined that the fusion matrix corresponding to the segmented image is a fusion matrix in which color gradation occurs for the pixel values of the pixel points on the right side; under the condition that the overlapping number is 3 and the overlapping positions are respectively a right frame, an upper frame and a lower frame, determining that the fusion matrix corresponding to the segmented image is a fusion matrix with color gradient of pixel values of pixel points at the left part; under the condition that the overlapping number is 3 and the overlapping positions are respectively a left frame, an upper frame and a right frame, determining that a fusion matrix corresponding to the segmented image is a fusion matrix with color gradient of pixel values of pixel points at the lower part; and under the condition that the overlapping number is 3 and the overlapping positions are respectively a left frame, a lower frame and a right frame, determining the fusion matrix corresponding to the segmented image as a fusion matrix with color gradient of pixel values of pixel points at the upper part. For example, the respective fusion matrices may refer to the fusion matrices indicated by the respective binary images shown in fig. 10 described above.
As an example, in the case where the number of overlaps is 2 and the overlapping positions are the left frame and the upper frame, respectively, it is determined that the fusion matrix corresponding to the segmented image is a fusion matrix in which the pixel values of the pixels on the right portion undergo color gradation, and a fusion matrix in which the pixel values of the pixels on the lower portion undergo color gradation; under the condition that the overlapping number is 2 and the overlapping positions are respectively a left frame and a lower frame, determining that the fusion matrix corresponding to the segmented image is a fusion matrix with color gradient of pixel values of pixel points on the right side and a fusion matrix with color gradient of pixel values of pixel points on the upper side; under the condition that the overlapping number is 2 and the overlapping positions are respectively the right frame and the upper frame, determining that the fusion matrix corresponding to the segmented image is a fusion matrix with color gradient of pixel values of pixel points at the left part and a fusion matrix with color gradient of pixel values of pixel points at the lower part; and under the condition that the overlapping number is 2 and the overlapping positions are respectively the right frame and the lower frame, determining the fusion matrix corresponding to the segmented image as a fusion matrix with the color gradient of the pixel value of the pixel point at the left part and a fusion matrix with the color gradient of the pixel value of the pixel point at the upper part.
As an example, in the case where the number of overlaps is 1 and the overlapping position is the left frame, it is determined that the fusion matrix corresponding to the segmented image is a fusion matrix in which color gradation occurs to the pixel values of the pixels on the right side, a fusion matrix in which color gradation occurs to the pixel values of the pixels on the upper side, and a fusion matrix in which color gradation occurs to the pixel values of the pixels on the lower side; under the condition that the overlapping number is 1 and the overlapping position is the right frame, determining that the fusion matrix corresponding to the segmented image is a fusion matrix with color gradation of pixel values of pixel points at the left side, a fusion matrix with color gradation of pixel values of pixel points at the upper side and a fusion matrix with color gradation of pixel values of pixel points at the lower side; under the condition that the overlapping number is 1 and the overlapping position is the upper frame, determining that the fusion matrix corresponding to the segmented image is a fusion matrix with color gradation of pixel values of pixel points at the left side, a fusion matrix with color gradation of pixel values of pixel points at the right side and a fusion matrix with color gradation of pixel values of pixel points at the lower side; and under the condition that the overlapping number is 1 and the overlapping position is the lower frame, determining the fusion matrix corresponding to the segmented image as a fusion matrix with color gradation of the pixel values of the pixel points at the left part, a fusion matrix with color gradation of the pixel values of the pixel points at the right part and a fusion matrix with color gradation of the pixel values of the pixel points at the upper part.
As an example, in the case where the overlap number is 0, the fusion matrix corresponding to the tile image is determined as a fusion matrix in which the pixel value of the pixel point at the left side portion is color-graded, a fusion matrix in which the pixel value of the pixel point at the right side portion is color-graded, a fusion matrix in which the pixel value of the pixel point at the upper side portion is color-graded, and a fusion matrix in which the pixel value of the pixel point at the lower side portion is color-graded.
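The mapping above can be summarized: a tile receives a color-gradation ramp on each side that does not coincide with the first image's border, that is, each side where it overlaps a neighboring tile. The following is a hypothetical sketch of such a fusion matrix; the function name, the linear ramp shape, and the `interior_sides` parameter are illustrative assumptions, not the patent's actual definition.

```python
# Hypothetical sketch of the fusion-matrix idea described above: each side of
# a tile that overlaps a neighboring tile gets a linear weight ramp over the
# overlap width, so that the weights of two adjacent tiles sum to 1 there.

def fusion_matrix(height, width, overlap, interior_sides):
    """interior_sides: subset of {'left','right','top','bottom'} naming the
    sides where the tile overlaps a neighbor (not on the image border)."""
    def ramp(i, n, fade_in, fade_out):
        w = 1.0
        if fade_in and i < overlap:            # ramp up from the edge
            w = min(w, (i + 1) / (overlap + 1))
        if fade_out and i >= n - overlap:      # ramp down toward the edge
            w = min(w, (n - i) / (overlap + 1))
        return w

    return [[ramp(r, height, 'top' in interior_sides, 'bottom' in interior_sides)
             * ramp(c, width, 'left' in interior_sides, 'right' in interior_sides)
             for c in range(width)]
            for r in range(height)]
```

With this shape, a tile whose right side overlaps a neighbor fades out toward the right, while the neighbor (overlapping on its left side) fades in, and the two weights are complementary in the overlap columns.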
As an example, the moire elimination module may compare the coordinate position of the frame pixel point of each tile image in the preset coordinate system with the coordinate position of the frame pixel point of the first image in the preset coordinate system to determine the overlapping frame and the overlapping number corresponding to each tile image. In an exemplary embodiment, the upper frame of the segmented image and the upper frame of the first image are determined to overlap when the ordinate of any one of the pixel points in the upper frame of the segmented image is the same as the ordinate of any one of the pixel points in the upper frame of the first image.
It should be noted that, the coordinate system where the tile image and the first image are located may be the same coordinate system, the preset coordinate system may use the top left vertex of the first image as the origin, the straight line where the upper frame is located is the X axis (abscissa), the straight line where the left frame is located is the Y axis (ordinate), or may be a coordinate system established in other manners, which is not limited in particular in the embodiment of the present application.
As an example, the moire eliminating module may determine the corresponding fusion matrix according to the overlapping number and overlapping positions of the frame of each segmented image relative to the frame of the first image. Alternatively, the positions of the four vertices of the segmented image relative to the first image may be determined in other manners, for example, by comparing the coordinates of the four vertices of the segmented image in the preset coordinate system with the four vertices and/or the four frames of the first image, which is not described in detail in the embodiment of the present application.
Since the color-graded portion indicated in a fusion matrix corresponds to a portion where images overlap, the portion where color gradation occurs in each of the plurality of color-graded segmented images is exactly the portion where that segmented image overlaps its neighbors.
In some embodiments, the moire eliminating module fuses the plurality of color graded segmented images according to the cutting position of each segmented image, and the operation of obtaining the target image includes: sorting the plurality of color-graded segmented images according to a cutting sequence according to the cutting position of each segmented image, wherein the cutting sequence is used for indicating the corresponding positions of the plurality of color-graded segmented images in the first image; and fusing two adjacent color-graded segmented images in the sequenced color-graded segmented images to obtain a target image.
As one example, the moire elimination module may fuse the overlapping portions of two adjacent color-graded tile images. The fusion may be performed by adding the overlapping portions of the two adjacent color-graded tile images, whose gradation weights are complementary. For example, a scene in which the moire eliminating module fuses the overlapping portions of two adjacent color-graded tile images may refer to the scene shown in fig. 10 (a) and (b).
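As a hedged illustration of this fusion step (the function name and the row-list image representation are assumptions, not the patent's code), two horizontally adjacent, already color-graded tiles can be fused by summing their overlap columns:

```python
# Illustrative sketch: fuse two horizontally adjacent tiles whose pixel
# values have already been multiplied by their fusion matrices. The overlap
# columns are summed once; the rest of each tile is copied through.

def fuse_horizontal(left_tile, right_tile, overlap):
    """left_tile/right_tile: lists of equal-length rows of weighted pixels."""
    fused = []
    for lrow, rrow in zip(left_tile, right_tile):
        body = lrow[:-overlap]                       # left-only columns
        seam = [a + b for a, b in zip(lrow[-overlap:], rrow[:overlap])]
        fused.append(body + seam + rrow[overlap:])   # seam summed once
    return fused
```

Because the gradation weights of the two tiles sum to 1 in the overlap region, the summed seam carries no visible boundary.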
In some embodiments, the moire elimination module may also fuse the plurality of segmented images for eliminating the moire in other manners, for example, the moire elimination module may also fuse the plurality of segmented images for eliminating the moire by poisson fusion, multi-band fusion, or the like, which is not limited in particular by the embodiment of the present application.
In some embodiments, during the cutting of the first image there may be no overlapping portion between two adjacent tile images, that is, the preset overlapping size is not set during the cutting. In that case, after the plurality of segmented images with moire eliminated are obtained, they can only be directly spliced. Since splicing is not a fusion operation, the spliced target image has an obvious boundary feeling, as shown in diagram (a) in fig. 11, and the image display quality is not ideal. Therefore, in order to improve the image display quality, the moire eliminating module may cut the first image according to the operation of step 810; since the overlapping portions between two adjacent segmented images can be fused, it is ensured that there is no obvious boundary feeling after the images are fused, and the image display quality is improved, as shown in fig. 11 (b).
In some embodiments, the size of the first image may be greater than the first target size, and of course, the size of the first image may also be less than or equal to the first target size, where the size of the first image is less than or equal to the first target size, the moire eliminating module may eliminate the moire carried by the first image in other ways.
As an example, in the case where the size of the first image is smaller than the first target size, the first image may be resampled to obtain a fourth reference image whose size is the first target size, and the moire of the fourth reference image is then eliminated through the target elimination model to obtain a target image. In the case where the size of the first image is equal to the first target size, the moire eliminating module can eliminate the moire of the first image directly through the target elimination model to obtain the target image.
It is worth noting that, when the size of the first image is smaller than the first target size, resampling the first image ensures that the obtained fourth reference image meets the size requirement of the target elimination model, so that moire can be eliminated from images of various sizes, improving the reliability of moire elimination.
In some embodiments, the moire removal module may further filter the first image before determining the at least one tile image of the first image if the size of the first image is not the first target size.
Since the first image carries moire, when its size is not the first target size it needs to be cut and/or resampled, and resampling may aggravate the moire in the first image. Therefore, to avoid aggravating the moire due to resampling, the moire eliminating module may further perform low-pass filtering on the first image before determining the at least one tile image of the first image.
Illustratively, to intuitively understand the effect of low-pass filtering on resampling, the embodiment of the present application illustrates, through the comparative schematic of fig. 12, how low-pass filtering prevents moire from being aggravated. Referring to fig. 12, after the first image is acquired by capturing the content displayed on the display screen, the display frequency of the first image changes obviously compared with the frequency of the displayed content, and frequency aliasing occurs in the first image, generating moire. If resampling is performed on this basis, the frequency aliasing is obviously aggravated, and the moire in the resampled first image becomes more severe. If, however, low-pass filtering is performed on the first image before resampling, the frequency aliasing can be obviously reduced, so that it is not aggravated after the filtered first image is resampled, the moire in the first image is not aggravated, and a better moire removal effect can be achieved after the moire carried by the first image is eliminated through the target elimination model.
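The aliasing effect described above can be illustrated with a minimal 1-D sketch, assuming a 3-tap box filter as the low-pass filter and decimation by 2 as the resampling; this is only an illustration of the principle, not the module's actual filter:

```python
# A high-frequency alternating pattern decimated directly aliases badly (the
# detail collapses to a constant 0), while filtering first attenuates the
# frequency that cannot survive the lower sampling rate.

def box_filter(signal, radius=1):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def downsample(signal, factor=2):
    return signal[::factor]

pattern = [0, 1] * 8                       # fastest-alternating pattern
aliased = downsample(pattern)              # keeps only the 0s: misleading
smoothed = downsample(box_filter(pattern)) # values near the true mean 0.5
```

The same reasoning extends to 2-D images, where the aliased frequencies appear as moire.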
Step 813: the moire eliminating module sends the target image to the multimedia database.
Step 814: the multimedia database receives the target image.
Step 815: the multimedia database stores the target image and sends the target image to the gallery.
It should be noted that the multimedia database may store the target image and the first image at the same time, thereby avoiding loss of image information.
Step 816: the gallery receives the target image and displays the target image.
In this embodiment of the present application, in the case where the size of the first image is large and does not meet the size requirement of the target elimination model trained in advance, at least one segmented image of the first image may be determined, and since the size of the at least one segmented image meets the size requirement of the target elimination model, the moire in the at least one segmented image can be eliminated by the target elimination model. Since the elimination of the moire in the first image can be realized by determining at least one segmented image of the first image and eliminating the moire of each segmented image in the at least one segmented image under the condition that the size of the first image is large, the elimination of the moire in the first image can be ensured, and the elimination of the moire in images with various sizes can be ensured. And because the network structure of the target elimination model is an encoding-decoding structure, the network structure is simpler, and the occupied memory is small, thereby improving the operation efficiency of the electronic equipment.
The electronic device may include not only a camera, a gallery, a moire elimination module and a multimedia database, but also other modules. For example, the electronic device may also include a file scanning module, and the electronic device may also realize elimination of moire through the camera, the file scanning module and the moire elimination module. Referring to fig. 13, fig. 13 is a flowchart of a method for eliminating moire according to another exemplary embodiment, which is used in an electronic device by way of example and not limitation. The description takes as an example the camera, the file scanning module and the moire eliminating module included in the electronic device, and the method may include some or all of the following:
step 1301: and the file scanning module receives file scanning operation triggered by a user.
Step 1302: and the file scanning module responds to the file scanning operation and sends a starting message to the camera.
Step 1303: the camera receives the starting message and starts.
Step 1304: the camera collects a first image and sends the first image to the file scanning module.
Step 1305: the file scanning module receives the first image and displays the first image.
Step 1306: the document scanning module receives a moire eliminating operation triggered by a user.
Step 1307: the document scanning module sends the first image to the moire eliminating module.
The operations from step 1308 to step 1312 may refer to the operations from step 808 to step 812, which are not described in detail in the embodiments of the present application.
In this embodiment of the present application, in the case where the size of the first image is large and does not meet the size requirement of the target elimination model trained in advance, at least one segmented image of the first image may be determined, and since the size of the at least one segmented image meets the size requirement of the target elimination model, the moire in the at least one segmented image can be eliminated by the target elimination model. Since the elimination of the moire in the first image can be realized by determining at least one segmented image of the first image and eliminating the moire of each segmented image in the at least one segmented image under the condition that the size of the first image is large, the elimination of the moire in the first image can be ensured, and the elimination of the moire in images with various sizes can be ensured. And because the network structure of the target elimination model is an encoding-decoding structure, the network structure is simpler, and the occupied memory is small, thereby improving the operation efficiency of the electronic equipment.
Next, the embodiment of the application explains a method flow of training the electronic equipment to obtain the target elimination model. Referring to fig. 14, fig. 14 is a schematic flow chart of a model training method provided in an embodiment of the present application, which is used herein for illustration and not limitation, and the method may include some or all of the following:
step 1401: the electronic device acquires a plurality of negative sample images.
It should be noted that, each negative image in the plurality of negative images is obtained by shooting the display content of the corresponding display screen, and each negative image in the plurality of negative images carries moire.
In some embodiments, the plurality of negative sample images may be acquired by the electronic device through a camera thereof, or may be sent to the electronic device after being shot by other devices.
It should be noted that the plurality of negative sample images may have different sizes or the same size, and when the plurality of negative sample images are obtained by photographing, the distances between the camera and the corresponding display screen may be the same or different; that is, the depth information of the plurality of negative sample images may be the same or different. Of course, when the distance between the camera and the displayed content differs during shooting, the severity of the moire in the collected images also differs. For example, referring to fig. 15, to the human eye the moire in the image collected 20 cm away from the display screen is lighter, while the moire in the image collected 10 cm away is more serious. Therefore, in general, in order to increase the diversity of the training data, the depth information of the plurality of negative sample images is different.
Step 1402: for each negative image in the plurality of negative images, the electronic device obtains a screen capture image of the display content of the display screen corresponding to each negative image.
Because the positive sample image is required to have no moire, and the screen capturing image obtained by directly capturing the content displayed by the display screen does not have moire, the electronic equipment can acquire the screen capturing image of the display content of the display screen corresponding to each negative sample image.
In some embodiments, the electronic device may directly determine the obtained screenshot image as a positive sample image, but to reduce the stress of model training, the electronic device may also perform some processing on the screenshot image to obtain the positive sample image. The processing of the screen capturing image corresponding to each negative sample image by the electronic device may refer to steps 1403-1404 described below.
Step 1403: and performing topology transformation on the screen capturing image corresponding to each negative sample image to obtain a topology transformation screen capturing image corresponding to each negative sample image.
It should be noted that the data features of the topologically transformed screen capture image are the same as the data features of the corresponding negative sample image.
Because the camera is not exactly facing the display screen when each negative sample image is shot, and background patterns other than the display screen content may exist in the negative sample images, in order to ensure the consistency of data features between the screen capture images and the corresponding negative sample images, the electronic device can perform topological transformation on the screen capture image corresponding to each negative sample image.
As an example, the electronic device may extract the feature point of each negative sample image and the feature point of the corresponding screenshot image, and perform topology transformation on the screenshot image corresponding to each negative sample image according to the feature point of each negative sample image and the feature point of the corresponding screenshot image, to obtain a topology transformed screenshot image corresponding to each negative sample image.
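The topology transformation from matched feature points can be sketched as a point-correspondence fit. The following is a minimal illustration, not the patent's implementation: it estimates an affine warp from three matched feature points (a practical system would robustly fit a full homography, e.g. with RANSAC); all names are illustrative.

```python
# Estimate an affine transform x' = a*x + b*y + c, y' = d*x + e*y + f from
# three point correspondences, by solving the 3x3 system with Cramer's rule.

def affine_from_3_points(src, dst):
    """src, dst: three (x, y) feature-point pairs; returns (a,b,c,d,e,f)."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def solve(v0, v1, v2):
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        c = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c

    a, b, c = solve(dst[0][0], dst[1][0], dst[2][0])   # x-coordinates
    d, e, f = solve(dst[0][1], dst[1][1], dst[2][1])   # y-coordinates
    return a, b, c, d, e, f
```

Applying the inverse of such a warp to the screenshot aligns its data features with those of the negative sample image, as fig. 16 illustrates.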
For example, referring to fig. 16, one of the negative sample images a and the corresponding screenshot image a acquired by the electronic device may be as shown in fig. 16, and after the electronic device performs topology transformation on the screenshot image a, the data features of the obtained topology-transformed screenshot image a are consistent with the data features of the negative sample image a.
Step 1404: and migrating the color characteristics of each negative sample image into the corresponding topological transformation screen capturing image to obtain a positive sample image corresponding to each negative sample image.
It should be noted that, the positive sample image corresponding to each negative sample image does not carry moire, the negative sample images are in one-to-one correspondence with the positive sample images, and the negative sample images and the positive sample images form training data for model training.
In general, the electronic device may directly determine the topologically transformed screen capture image as a positive sample image. In that case, the electronic device may perform model training according to the operation of step 1405 described below, but the obtained elimination model, while removing the moire carried by an image, may damage the original information of the image or make a portrait image unsightly. For example, referring to fig. 17, for a portrait image, after the moire is removed by the elimination model, the color of the person in the image may become excessively dark, affecting the appearance; for an image whose original color is dark, after the moire is eliminated by the elimination model, the image may become even darker, so that the user cannot see the information in it; for some images, eliminating the moire through the elimination model may also cause part of the content to be too bright or image information to be lost, again making the information invisible to the user. Therefore, in order to avoid the color of an image being altered in the process of eliminating its moire through the trained elimination model, the electronic device can unify the colors of each negative sample image and the corresponding topologically transformed screen capture image. Typically, the electronic device may migrate the color features of each negative sample image into the corresponding topologically transformed screen capture image, thereby obtaining the positive sample image corresponding to each negative sample image.
In some embodiments, the operation of migrating the color features of each negative sample image into the corresponding topologically transformed screen capture image may refer to the related art. For example, referring to fig. 16, the electronic device may determine a MASK (mask) area in the topologically transformed screen capture image according to the color of its data features, and perform a morphological opening operation on the MASK area to ensure that the pixel points in the MASK area are black. Color migration is then completed based on the MASK region and the Reinhard criterion.
As one example, the electronic device may perform color migration based on the MASK region and the Reinhard criterion by the fourth formula described below.
The fourth formula is a Reinhard-style mean/variance transfer: I_result = (I_original − mean(I_original)) × (var(I_reference) / var(I_original))^(1/2) + mean(I_reference), where I_original is the pixel values, in the LAB (Lab color space) space, of the MASK region in the topologically transformed screen capture image, I_reference is the pixel values in Lab space of the corresponding MASK region in the negative sample image, mean() represents taking the mean, and var() represents taking the variance.
Since the data features of the topologically transformed screen capture image are the same as those of the corresponding negative sample image, the MASK region in the topologically transformed screen capture image is the same as the MASK region of the corresponding negative sample image.
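A minimal per-channel sketch of the Reinhard-style transfer described above (mean/variance matching; the names are illustrative, and a real implementation would operate on the Lab channels of the MASK region):

```python
# Normalize the source channel by its own statistics and rescale it to the
# reference channel's statistics, matching the fourth formula term-by-term.

def color_transfer(src_channel, ref_channel):
    def mean(v):
        return sum(v) / len(v)

    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    m_s, m_r = mean(src_channel), mean(ref_channel)
    scale = (var(ref_channel) / var(src_channel)) ** 0.5
    return [(x - m_s) * scale + m_r for x in src_channel]
```

After the transfer, the source channel has the reference channel's mean and variance, which is exactly the color unification the training data requires.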
Step 1405: and carrying out iterative training on the initial elimination model based on the training data so as to obtain a target elimination model.
It should be noted that the network structure of the initial cancellation model is an encoding-decoding structure, and the initial cancellation model may be a U-net model, for example.
In some embodiments, the model may be provided with a loss function, and the loss function is used in a model training phase, and after each batch of training data is input into the initial cancellation model, a predicted value is output through forward propagation, and a difference value (also referred to as a loss value) between the predicted value and a true value can be calculated through the loss function. After obtaining the loss value, the initial elimination model updates each parameter through back propagation to reduce the loss between the real value and the predicted value, so that the generated predicted value approaches to the real value direction, and the training purpose is achieved. The loss function of the target cancellation model may be as follows.
In the loss function (5), I_GT is the data of the topologically transformed screen capture image (the ground truth), Î is the output data of the target elimination model, and L is the loss value of the loss function, which combines four terms: the Charbonnier Loss, the Perceptual Loss, the gradient loss, and the SSIM loss (an image quality loss function).
In some embodiments, the Loss function may be as described above, or may be another Loss function, for example, the Loss function is a Loss function formed by at least one of Charbonnier Loss, perceptual Loss, gradient Loss, and SSIM Loss, which is not particularly limited by the embodiment of the present application.
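For concreteness, one of the named terms, the Charbonnier loss (a smooth approximation of L1), can be sketched as follows; the epsilon value and the element-wise mean formulation are common defaults, not values taken from the patent:

```python
# Charbonnier loss: sqrt((pred - target)^2 + eps^2), averaged over elements.
# Near-zero errors behave like L2 (smooth gradient), large errors like L1.

def charbonnier_loss(pred, target, eps=1e-3):
    return sum(((p - t) ** 2 + eps ** 2) ** 0.5
               for p, t in zip(pred, target)) / len(pred)
```

In a full training loop this term would be summed with the perceptual, gradient, and SSIM terms to form the loss value L of formula (5).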
In some embodiments, the electronic device may determine that the iterative training is completed when the loss value of the loss function is less than a preset value, where the preset value may be preset according to the requirement, so as to obtain the target elimination model.
In the embodiment of the present application, the colors of each negative sample image and the corresponding positive sample image are unified, which reduces the burden of learning image brightness during model training, allows the model to focus on learning to remove moire, avoids changing the colors of images during subsequent moire elimination, and reduces damage to image information.
Next, referring to fig. 18, fig. 18 is a schematic flow chart of a method for eliminating moire according to an embodiment of the present application, which is illustrated herein by way of example and not limitation, and the method may include some or all of the following:
Step 1801: the first image is displayed.
The first image is an image carrying moire. The first image may be obtained by the electronic device shooting the content displayed on a display screen through its camera, or may be sent to the electronic device by another device. For example, referring to fig. 19, the electronic device displays a first image carrying moire, that is, the electronic device acquires a moire image.
Step 1802: a first user operation is received on a first image.
Since the moire of the first image affects the display quality of the image, in order to improve the display quality of the first image, a user may perform some operations to enable the electronic device to eliminate the moire of the first image. For example, the scenario may refer to the application scenario shown in fig. 5 or fig. 6 described above.
Step 1803: in response to a first user operation, at least one tile image of the first image is determined in the event that the size of the first image is greater than a first target size.
The size of each tile image in the at least one tile image is the first target size.
It is worth noting that the interactivity with the user is improved by performing the elimination operation of the moire in the first image in response to the first user operation triggered by the user.
In some embodiments, the electronic device may determine at least one tile image of the first image in response to a first user operation if the size of the first image is greater than the first target size. In other embodiments, the at least one tile image may be determined without a user operation; for example, the electronic device may automatically identify whether the first image carries moire, and determine the at least one tile image if the first image carries moire and its size is greater than the first target size.
Because the electronic device eliminates moire in the first image in different manners for the first image with different sizes, the electronic device needs to determine the size of the first image, and determine at least one tile image of the first image if the size of the first image is greater than the first target size.
As one example, in the case where the size of the first image is larger than the first target size, the electronic device can determine at least one tile image of the first image in different ways, and explanation will be made by taking the following two ways as examples.
In one possible implementation, where the size of the first image is greater than the first target size, determining at least one tile image of the first image includes: determining the number of slices for slicing the first image according to the size of the first image, the first target size and a preset overlapping size under the condition that the size of the first image is larger than the first target size; under the condition that the number of slices is larger than 1, cutting the first image according to the first target size, the preset overlapping size and the number of slices to obtain a plurality of slice images; and under the condition that the slice number is 1, resampling the first image to obtain a first reference image, wherein the size of the first reference image is a first target size, and the first reference image is a slice image of the first image.
It should be noted that, in the case that the size of the first image is greater than the first target size, the specific manner of determining the at least one segmented image of the first image by the electronic device may refer to the operation of determining the at least one segmented image of the first image by the moire eliminating module in the step 810, which is not described in detail in the embodiment of the present application.
It should be noted that, by determining the number of slices of the first image, the processing mode of the first image can be accurately selected, so that the accuracy of processing the first image is improved.
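Under the assumption that each additional tile advances by the tile size minus the preset overlap, the slice count along one axis can be sketched as follows (function and parameter names are illustrative, not the patent's):

```python
import math

# With tile length T and preset overlap V, the first tile covers T and each
# further tile adds (T - V), so n tiles cover T + (n - 1) * (T - V).

def slice_count(image_len, tile_len, overlap):
    if image_len <= tile_len:
        return 1    # a single tile (possibly after resampling) suffices
    return math.ceil((image_len - overlap) / (tile_len - overlap))
```

A count of 1 corresponds to the resampling branch described above, while a count greater than 1 triggers the cutting branch.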
In some embodiments, in a case where the number of slices is greater than 1, performing a cutting operation on the first image according to the first target size, the preset overlap size, and the number of slices, the operation of obtaining the plurality of slice images includes: under the condition that the number of the slices is larger than 1, determining a second target size according to the first target size, the preset overlapping size and the number of the slices, wherein the second target size is a size which does not need resampling, and the second target size is larger than the first target size; resampling the first image to obtain a second reference image, wherein the size of the second reference image is the second target size; and cutting the second reference image according to the first target size, the preset overlapping size and the number of slices to obtain a plurality of slice images.
It should be noted that, when the number of slices is greater than 1, the electronic device performs a cutting operation on the first image according to the first target size, the preset overlapping size and the number of slices, so that the specific operation of obtaining the plurality of segmented images may perform the cutting operation on the first image by using the moire eliminating module in the step 810 according to the first target size, the preset overlapping size and the number of slices, so that the operation of obtaining the plurality of segmented images is not described in detail in the embodiment of the present application.
It should be noted that, when the size of the first image is not the second target size, resampling the first image ensures that the first image can be completely cut, so that loss of the image information of the first image is avoided as much as possible.
In some embodiments, the electronic device resampling the first image if the size of the first image is not the second target size comprises: downsampling the first image if the size of the first image is greater than the second target size; the first image is upsampled if the size of the first image is smaller than the second target size.
In another possible implementation, where the size of the first image is greater than the first target size, determining at least one tile image of the first image includes: under the condition that the size of the first image is larger than the first target size, cutting the first image according to the first target size and the preset overlapping size to obtain at least one third reference image, wherein the size of the at least one third reference image is not larger than the first target size; resampling the third reference image having a size smaller than the first target size such that the resampled third reference image has a size of the first target size in the case that the third reference image having a size smaller than the first target size exists in the at least one third reference image; and determining the non-resampled third reference image and the resampled third reference image as the sliced image of the first image to obtain at least one sliced image.
It should be noted that, in the case where the size of the first image is greater than the first target size, the operation of determining, by the electronic device, the at least one tile image of the first image may refer to the operation of determining, by the moire eliminating module, the at least one tile image of the first image in another possible implementation manner when the size of the first image is greater than the first target size in the above step 810, which is not described in detail herein in the embodiments of the present application.
It is worth noting that cutting the first image directly according to the first target size and the preset overlapping size reduces the computational complexity and improves the cutting efficiency. In addition, resampling each third reference image whose size is smaller than the first target size ensures that the first image can be cut completely while keeping complete image information, avoiding loss of image information and improving the accuracy of image cutting.
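The direct-cutting variant can be sketched as follows along one dimension; `cut_direct`, the stride logic, and the linear-interpolation resize of undersized edge tiles are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def tile_positions(length: int, tile: int, overlap: int):
    """Start offsets that cover [0, length) with tiles of `tile`, stepping tile - overlap."""
    stride = tile - overlap
    return list(range(0, max(length - overlap, 1), stride))

def cut_direct(image: np.ndarray, tile: int, overlap: int):
    """Cut directly at the target size; resample edge tiles that come out short."""
    pieces = []
    for s in tile_positions(len(image), tile, overlap):
        piece = image[s : s + tile]
        if len(piece) < tile:  # edge tile smaller than the first target size
            idx = np.linspace(0.0, len(piece) - 1, tile)
            piece = np.interp(idx, np.arange(len(piece)), piece)  # upsample to target
        pieces.append(piece)
    return pieces
```

Only the last tile along each axis can come out short, so at most one resampling pass per axis is needed.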
Step 1804: eliminating, through a target elimination model, the moire carried by each sliced image in the at least one sliced image, to obtain at least one sliced image with the moire eliminated.
The target elimination model is a neural network model for eliminating moire trained in advance, and the network structure of the target elimination model is an encoding-decoding structure.
In some embodiments, the size of the first image may be greater than the first target size, and of course, the size of the first image may also be less than or equal to the first target size, where the electronic device may eliminate moire carried by the first image in other ways.
As an example, when the size of the first image is smaller than the first target size, the first image may be resampled to obtain a fourth reference image whose size is the first target size, and the moire of the fourth reference image is then eliminated through the target elimination model to obtain the target image. When the size of the first image is equal to the first target size, the electronic device can directly eliminate the moire of the first image through the target elimination model to obtain the target image.
It is worth noting that, when the size of the first image is smaller than the first target size, resampling the first image makes the size of the resulting fourth reference image meet the size requirement of the target elimination model, which ensures that moire can be eliminated from images of various sizes and improves the reliability of moire elimination.
In some embodiments, the moire removal module may further filter the first image before determining the at least one tile image of the first image if the size of the first image is not the first target size.
Since the first image carries moire, when its size is not the first target size the first image needs to be cut and/or resampled, and resampling may aggravate the moire in the first image. To avoid this, the electronic device may further perform low-pass filtering on the first image before determining the at least one sliced image of the first image. That is, referring to fig. 19, after the electronic device acquires the moire image (i.e., the first image), because the moire image may need to be resampled, the electronic device may first perform low-pass filtering on the moire image and then perform image segmentation and/or image resampling; fig. 19 takes image segmentation as an example.
It is worth noting that performing low-pass filtering on the first image avoids aggravating frequency aliasing when the first image is subsequently resampled, so that after the target elimination model eliminates the moire carried by the first image, a better moire-removal effect can be achieved.
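A minimal 1-D sketch of this anti-aliasing step, assuming a Gaussian low-pass filter; the patent does not specify the filter, and the cutoff heuristic below is an assumption chosen only to illustrate blurring before decimation.

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    """Normalized Gaussian kernel over [-radius, radius]."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def lowpass_then_downsample(signal: np.ndarray, factor: int) -> np.ndarray:
    """Blur first so frequencies above the new Nyquist limit are attenuated,
    then decimate; this is the anti-aliasing behaviour described above."""
    sigma = factor / 2.0                       # heuristic cutoff (assumption)
    k = gaussian_kernel1d(sigma, radius=3 * factor)
    smoothed = np.convolve(signal, k, mode="same")
    return smoothed[::factor]

# A Nyquist-rate stripe (a moire-like pattern) is almost removed before decimation,
# instead of aliasing down to a low-frequency artifact.
stripe = np.tile([1.0, -1.0], 64)
out = lowpass_then_downsample(stripe, 2)
```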
In some embodiments, before the target elimination model eliminates the moire carried by each sliced image in the at least one sliced image to obtain at least one sliced image with the moire eliminated, the electronic device may further obtain training data, where the training data includes a plurality of negative sample images and a plurality of positive sample images, each negative sample image carries moire, each positive sample image carries no moire, and the plurality of negative sample images correspond one-to-one to the plurality of positive sample images; and iteratively train an initial elimination model based on the training data to obtain the target elimination model, where the network structure of the initial elimination model is an encoding-decoding structure.
For example, the process of training the initial cancellation model may refer to the training process schematic shown in fig. 20, after training data is acquired, the training data may be input into the initial cancellation model in pairs, and during the training process, a moire-free image corresponding to each negative sample image may be output under the constraints of a plurality of positive sample images and a loss function.
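A toy stand-in for this paired training loop is sketched below. It replaces the convolutional encoding-decoding network with a tiny linear encoder-decoder trained on synthetic (moire, clean) pairs, purely to illustrate the supervision with positive samples and a loss function; the data, architecture, and hyperparameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: clean patches (positive samples) plus a fixed high-frequency
# additive pattern standing in for moire (negative samples).
n, d, h = 512, 16, 8
clean = rng.uniform(0.2, 0.8, size=(n, d))
moire = 0.3 * np.cos(np.arange(d) * 2.5)
noisy = clean + moire

# Tiny linear encoder-decoder (stand-in for the encoding-decoding network).
W1 = rng.normal(0.0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, (h, d)); b2 = np.zeros(d)

losses, lr = [], 0.05
for _ in range(300):
    code = noisy @ W1 + b1            # encoder
    pred = code @ W2 + b2             # decoder: predicted moire-free patch
    err = pred - clean                # constraint from the positive samples
    losses.append(float((err ** 2).mean()))
    g = 2.0 * err / err.size          # gradient of the MSE loss w.r.t. pred
    gW2, gb2 = code.T @ g, g.sum(axis=0)
    gcode = g @ W2.T
    gW1, gb1 = noisy.T @ gcode, gcode.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The real model operates on image tensors with convolutions; only the pairing of negative inputs against positive targets under a loss function carries over from this sketch.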
It is worth noting that, because the encoding-decoding network structure of the target elimination model is simple, the model occupies little memory and can be deployed in the electronic device, which guarantees the operating efficiency of the electronic device.
In some embodiments, the target elimination model may be deployed in an electronic device or in a cloud, which is not particularly limited in the embodiments of the present application.
In some embodiments, the operation of the electronic device to obtain training data comprises: acquiring a plurality of negative sample images, wherein each negative sample image in the plurality of negative sample images is obtained by shooting the display content of a corresponding display screen; for each negative sample image in the plurality of negative sample images, acquiring a screen capturing image of display content of a display screen corresponding to each negative sample image; performing topology transformation on the screen capturing image corresponding to each negative sample image to obtain a topology transformation screen capturing image corresponding to each negative sample image, wherein the data characteristics of the topology transformation screen capturing image are the same as the data characteristics of the corresponding negative sample image; and migrating the color characteristics of each negative sample image into the corresponding topological transformation screen capturing image to obtain a positive sample image corresponding to each negative sample image.
It should be noted that, the specific operation of the electronic device to obtain the training data may refer to the operations of the steps 1401 to 1404, which are not described in detail in the embodiment of the present application.
It is worth noting that unifying the colors of each negative sample image and its corresponding positive sample image reduces the burden of learning image brightness during model training, allowing the model to concentrate on learning moire removal; it also avoids image color shifts during subsequent moire elimination and reduces damage to image information.
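The color-migration step can be approximated with per-channel statistics matching (a Reinhard-style sketch; the patent does not specify the transfer method, so this is an assumption for illustration):

```python
import numpy as np

def transfer_color(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift and scale each channel of `target` so its mean and standard deviation
    match `source` — a simple stand-in for migrating the negative sample's color
    characteristics into the topology-transformed screenshot."""
    out = np.empty_like(target, dtype=float)
    for c in range(target.shape[-1]):
        s_mu, s_sd = source[..., c].mean(), source[..., c].std()
        t_mu, t_sd = target[..., c].mean(), target[..., c].std()
        out[..., c] = (target[..., c] - t_mu) / (t_sd + 1e-8) * s_sd + s_mu
    return np.clip(out, 0.0, 1.0)
```

Here `source` would be the captured negative sample image and `target` the topology-transformed screenshot, producing the color-consistent positive sample.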
Step 1805: a moire-removed target image is determined based on at least one moire-removed tile image.
In some embodiments, the operation of the electronic device determining the moire-removed target image based on the at least one moire-removed sliced image includes: when the number of moire-removed sliced images is 1, determining that sliced image as the target image; and when there are a plurality of moire-removed sliced images, fusing them to obtain the target image. For example, referring to fig. 19, after the electronic device performs the fusion operation on the sliced images (fusing the at least one moire-removed sliced image), a moire-free image (i.e., the target image) can be obtained.
It is worth noting that determining the target image in different manners according to the number of moire-removed sliced images improves the reliability of determining the target image.
In some embodiments, when there are a plurality of moire-removed sliced images, the operation of the electronic device fusing the plurality of moire-removed sliced images to obtain the target image includes: determining the cutting position of each of the plurality of sliced images; determining, according to the cutting position of each sliced image, a fusion matrix corresponding to that sliced image, where the fusion matrix makes the overlapping portion of the corresponding sliced image produce a color gradient; multiplying each sliced image by its corresponding fusion matrix to obtain a plurality of color-graded sliced images; and fusing the plurality of color-graded sliced images according to the cutting position of each sliced image to obtain the target image.
It should be noted that, when there are a plurality of moire-removed sliced images, the specific operation of the electronic device fusing the plurality of moire-removed sliced images to obtain the target image may refer to the operation in which the moire elimination module fuses the plurality of moire-removed sliced images in step 812 to obtain the target image, and is therefore not described in detail in the embodiments of the present application.
It is worth noting that fusing the at least one moire-removed sliced image through the fusion matrices avoids obvious seams in the fused image and improves the image display quality.
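One way to realize such a fusion matrix is a 1-D weight profile with linear ramps in the overlapped margins, so that the weights of adjacent tiles sum to 1 there and the overlap cross-fades smoothly. This is an illustrative sketch, not the patent's exact matrices.

```python
import numpy as np

def fusion_weights(tile: int, overlap: int, first: bool, last: bool) -> np.ndarray:
    """Weight profile for one tile: 1 in the interior, a linear ramp in each
    overlapped margin; adjacent tiles' ramps are mirror images summing to 1."""
    w = np.ones(tile)
    ramp = np.linspace(0.0, 1.0, overlap + 2)[1:-1]  # strictly inside (0, 1)
    if not first:
        w[:overlap] = ramp                # fade in on the left margin
    if not last:
        w[-overlap:] = ramp[::-1]         # fade out on the right margin
    return w

def fuse(tiles, tile: int, overlap: int, length: int) -> np.ndarray:
    """Weight each tile by its fusion profile and accumulate at its cut position."""
    stride = tile - overlap
    out = np.zeros(length)
    for i, t in enumerate(tiles):
        w = fusion_weights(tile, overlap, i == 0, i == len(tiles) - 1)
        out[i * stride : i * stride + tile] += w * t
    return out
```

For 2-D tiles, the outer product of the row and column profiles would give the per-pixel fusion matrix the tile is multiplied by.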
In the embodiments of the present application, when the size of the first image is large and does not meet the size requirement of the pre-trained target elimination model, at least one sliced image of the first image can be determined; because the size of each sliced image meets the size requirement of the target elimination model, the moire in each sliced image can be eliminated through the model. In this way, even when the first image is large, the moire in the first image can be eliminated by determining its sliced images and eliminating the moire of each of them, which ensures that moire can be eliminated from images of various sizes. Moreover, because the network structure of the target elimination model is a simple encoding-decoding structure, the model occupies little memory, thereby improving the operating efficiency of the electronic device.
Next, an electronic device according to an embodiment of the present application will be described.
Fig. 21 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 21, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces, such as may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C interfaces. The processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively, through different I2C interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through the I2C interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S interfaces. The processor 110 may be coupled to the audio module 170 through an I2S interface to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset.
The UART interface is a universal serial data bus for asynchronous communications. The UART interface may be a bi-directional communication bus. The UART interface may convert data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. Such as: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The USB interface 130 may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being an integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being an integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. Thus, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, such as referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. Such as storing files of music, video, etc. in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data (e.g., audio data, phonebook, etc.) created by the electronic device 100 during use, and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions such as music playing, recording, etc. through the audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, and application processor, etc.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. Pressure sensors come in various types, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the touch operation intensity according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity is smaller than the pressure threshold is applied to the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the pressure threshold is applied to the short message application icon, an instruction for creating a new short message is executed.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to identify the gesture of the electronic device 100, and may be used in applications such as landscape switching, pedometers, and the like.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 100 may range using the distance sensor 180F to achieve fast focus.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor 180K may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, data subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium such as a floppy Disk, a hard Disk, a magnetic tape, an optical medium such as a digital versatile Disk (Digital Versatile Disc, DVD), or a semiconductor medium such as a Solid State Disk (SSD), etc.
The above embodiments are not intended to limit the present application; any modification, equivalent substitution, or improvement made within the technical scope of the present application shall fall within the protection scope of the present application.

Claims (13)

1. A method for eliminating moire applied to an electronic device, the method comprising:
determining at least one sliced image of a first image, wherein the first image is an image carrying moire, the size of each sliced image in the at least one sliced image is a first target size, the first target size is the size of an image required to be input to a target elimination model, the target elimination model is a pre-trained neural network model for eliminating moire, and the network structure of the target elimination model is an encoding-decoding structure;
eliminating, through the target elimination model, the moire carried by each sliced image in the at least one sliced image, to obtain at least one moire-eliminated sliced image;
and determining a moire-eliminated target image based on the at least one moire-eliminated sliced image.
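The three steps of claim 1 can be sketched end-to-end on a 1-D "image"; all names are hypothetical, and `remove_moire_tile` is only a placeholder for the encoder-decoder elimination model:

```python
def split_into_tiles(image, tile_len):
    """Cut a 1-D 'image' into consecutive tiles of at most tile_len
    (no overlap here, for brevity; claims 2-6 add overlap and fusion)."""
    return [image[i:i + tile_len] for i in range(0, len(image), tile_len)]

def remove_moire_tile(tile):
    """Placeholder for the target elimination model (identity here)."""
    return tile  # a real model would return the demoireed tile

def eliminate_moire(image, tile_len):
    tiles = split_into_tiles(image, tile_len)      # step 1: slice
    clean = [remove_moire_tile(t) for t in tiles]  # step 2: per-tile model
    return [v for t in clean for v in t]           # step 3: merge
```

With the identity placeholder, the pipeline simply reconstructs the input, which makes the slice/process/merge bookkeeping easy to check.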
2. The method of claim 1, wherein the determining at least one sliced image of the first image, in a case that the size of the first image is greater than the first target size, comprises:
determining the number of slices for slicing the first image according to the size of the first image, the first target size and a preset overlapping size when the size of the first image is larger than the first target size;
under the condition that the number of slices is larger than 1, cutting the first image according to the first target size, the preset overlapping size and the number of slices to obtain a plurality of sliced images;
and under the condition that the number of slices is 1, resampling the first image to obtain a first reference image, wherein the size of the first reference image is the first target size, and the first reference image is a sliced image of the first image.
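Claim 2 gives no formula for the number of slices; under the common sliding-window reading (tile length T, overlap V, stride T − V per dimension), a sketch might be:

```python
import math

def slice_count(image_len: int, tile_len: int, overlap: int) -> int:
    """Number of tiles needed to cover one dimension of an image with
    tiles of tile_len that share `overlap` pixels with their neighbour.
    Hypothetical reading of claim 2; the patent states no formula."""
    if image_len <= tile_len:
        return 1  # the single-slice branch of claim 2 (resample instead)
    stride = tile_len - overlap
    # The first tile covers tile_len pixels; each further tile adds `stride`.
    return math.ceil((image_len - tile_len) / stride) + 1
```

For example, a 1000-pixel dimension with 256-pixel tiles and a 32-pixel overlap needs 5 tiles, since 4 tiles cover only 256 + 3 × 224 = 928 pixels.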
3. The method of claim 2, wherein, in the case where the number of slices is greater than 1, the cutting the first image according to the first target size, the preset overlapping size, and the number of slices to obtain a plurality of sliced images comprises:
determining, in the case that the number of slices is larger than 1, a second target size according to the first target size, the preset overlapping size and the number of slices, wherein the second target size is larger than the first target size and is a size from which cutting requires no resampling;
in the case that the size of the first image is not the second target size, resampling the first image to obtain a second reference image, wherein the size of the second reference image is the second target size;
and cutting the second reference image according to the first target size, the preset overlapping size and the number of slices to obtain the plurality of sliced images.
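One natural reading of claim 3's "second target size" is the exact length covered by n tiles of the first target size overlapping by the preset amount, i.e. S2 = n·T − (n − 1)·V, so that the cut requires no further resampling. A sketch under that assumption (names hypothetical):

```python
def second_target_size(n_slices: int, tile_len: int, overlap: int) -> int:
    """Exact length covered by n_slices tiles of tile_len with `overlap`
    pixels shared between neighbours (assumed reading of claim 3)."""
    return n_slices * tile_len - (n_slices - 1) * overlap

def cut_positions(n_slices: int, tile_len: int, overlap: int) -> list[tuple[int, int]]:
    """Start/end offsets of each tile inside the resampled image."""
    stride = tile_len - overlap
    return [(i * stride, i * stride + tile_len) for i in range(n_slices)]
```

The last tile's end offset then equals the second target size exactly, which is what makes the cut resampling-free.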
4. The method of claim 1, wherein the determining at least one sliced image of the first image, in a case that the size of the first image is greater than the first target size, comprises:
under the condition that the size of the first image is larger than the first target size, cutting the first image according to the first target size and a preset overlapping size to obtain at least one third reference image, wherein the size of each of the at least one third reference image is not larger than the first target size;
in the case that a third reference image having a size smaller than the first target size exists in the at least one third reference image, resampling the third reference image having the size smaller than the first target size, so that the size of the resampled third reference image is the first target size;
and determining the non-resampled third reference image and the resampled third reference image as the sliced images of the first image, to obtain the at least one sliced image.
5. The method of any of claims 1-4, wherein the determining a moire-eliminated target image based on the at least one moire-eliminated sliced image comprises:
in the case that the number of the at least one moire-eliminated sliced image is 1, determining the moire-eliminated sliced image as the target image;
and in the case that the number of the at least one moire-eliminated sliced image is plural, fusing the plurality of moire-eliminated sliced images to obtain the target image.
6. The method according to claim 5, wherein, in the case that the number of the at least one moire-eliminated sliced image is plural, the fusing the plurality of moire-eliminated sliced images to obtain the target image comprises:
in the case that the number of the at least one moire-eliminated sliced image is plural, determining a cutting position of each of the plurality of sliced images;
determining a fusion matrix corresponding to each sliced image according to the cutting position corresponding to the sliced image, wherein the fusion matrix is used for making the overlapped part of the corresponding sliced image produce a color gradient;
multiplying each sliced image by the corresponding fusion matrix to obtain a plurality of color-graded sliced images;
and fusing the plurality of color-graded sliced images according to the cutting position of each sliced image to obtain the target image.
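The "fusion matrix" of claim 6 is described only as producing a color gradient in the overlapped region; a common implementation is a linear ramp whose weights on the two sides of an overlap sum to one, so that multiplying each tile by its weights and summing yields a seamless result. A 1-D sketch of this assumed scheme:

```python
def blend_weights(tile_len, overlap, has_left, has_right):
    """Per-pixel weights for one tile: ramp up over the left overlap,
    flat 1.0 in the middle, ramp down over the right overlap."""
    w = [1.0] * tile_len
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)  # strictly inside (0, 1)
        if has_left:
            w[i] = t
        if has_right:
            w[tile_len - 1 - i] = t
    return w

def fuse_1d(tiles, starts, tile_len, overlap, out_len):
    """Weighted sum of overlapping 1-D tiles; in each overlap the two
    neighbours' weights sum to 1, giving the claimed gradient fusion."""
    out = [0.0] * out_len
    n = len(tiles)
    for k, (tile, start) in enumerate(zip(tiles, starts)):
        w = blend_weights(tile_len, overlap, k > 0, k < n - 1)
        for i in range(tile_len):
            out[start + i] += tile[i] * w[i]
    return out
```

Fusing two constant tiles of value 1.0 reproduces a constant 1.0 output, confirming that the ramp weights are complementary in the overlap.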
7. The method of claim 1, wherein the method further comprises:
in the case that the size of the first image is smaller than the first target size, resampling the first image to obtain a fourth reference image whose size is the first target size;
and eliminating the moire of the fourth reference image through the target elimination model to obtain the target image.
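The resampling of claims 2, 3 and 7 is not tied to any particular filter. A minimal nearest-neighbour sketch is shown below (a real system would more likely use bilinear or bicubic interpolation):

```python
def resample_nearest(img, out_h, out_w):
    """Resize a 2-D list-of-lists image to (out_h, out_w) by
    nearest-neighbour sampling. Placeholder for the unspecified
    resampling operation in the claims."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
```

Upscaling a 2x2 image to 4x4 simply replicates each pixel into a 2x2 block, which makes the index arithmetic easy to verify.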
8. The method of any one of claims 1-7, wherein the method further comprises:
and in the case that the size of the first image is not the first target size, performing filtering processing on the first image before determining the at least one sliced image of the first image.
9. The method of claim 1, wherein before the eliminating, through the target elimination model, the moire carried by each sliced image in the at least one sliced image to obtain at least one moire-eliminated sliced image, the method further comprises:
acquiring training data, wherein the training data comprises a plurality of negative sample images and a plurality of positive sample images, each negative sample image in the plurality of negative sample images carries moire, each positive sample image in the plurality of positive sample images does not carry moire, and the plurality of negative sample images are in one-to-one correspondence with the plurality of positive sample images;
and performing iterative training on an initial elimination model based on the training data to obtain the target elimination model, wherein the network structure of the initial elimination model is the encoding-decoding structure.
10. The method of claim 9, wherein the acquiring training data comprises:
acquiring the plurality of negative sample images, wherein each negative sample image in the plurality of negative sample images is obtained by shooting the display content of a corresponding display screen;
for each negative sample image in the plurality of negative sample images, acquiring a screen capture image of the display content of the display screen corresponding to the negative sample image;
performing topology transformation on the screen capture image corresponding to each negative sample image to obtain a topology-transformed screen capture image corresponding to the negative sample image, wherein the data characteristics of the topology-transformed screen capture image are the same as the data characteristics of the corresponding negative sample image;
and migrating the color characteristics of each negative sample image into the corresponding topology-transformed screen capture image to obtain a positive sample image corresponding to the negative sample image.
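The color-characteristic migration in claim 10 is unspecified; one common technique it could correspond to is per-channel mean/variance matching in the style of Reinhard color transfer. A single-channel sketch of that assumed approach (the real method in the patent may differ):

```python
import statistics

def transfer_color_stats(source: list[float], target: list[float]) -> list[float]:
    """Shift and scale `target` so its mean and standard deviation match
    `source`. Hypothetical stand-in for 'migrating the color
    characteristics' of a negative sample into its screenshot."""
    mu_s, sd_s = statistics.mean(source), statistics.pstdev(source)
    mu_t, sd_t = statistics.mean(target), statistics.pstdev(target)
    scale = sd_s / sd_t if sd_t > 0 else 0.0
    return [(v - mu_t) * scale + mu_s for v in target]
```

After the transfer, the adjusted channel has the source's statistics, so the screenshot-derived positive sample shares the captured photo's overall tone.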
11. The method of any of claims 1-10, wherein the determining at least one sliced image of the first image, in a case that the size of the first image is greater than the first target size, comprises:
displaying the first image;
receiving a first user operation on the first image;
in response to the first user operation, determining the at least one sliced image of the first image in the case that the size of the first image is greater than the first target size.
12. An electronic device, comprising a processor and a memory;
wherein the memory is configured to store a program for supporting the electronic device in performing the method according to any one of claims 1-11.
13. A computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of claims 1-11.
CN202310488968.2A 2023-04-28 2023-04-28 Moire pattern eliminating method, electronic device and readable storage medium Pending CN117132479A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310488968.2A CN117132479A (en) 2023-04-28 2023-04-28 Moire pattern eliminating method, electronic device and readable storage medium


Publications (1)

Publication Number Publication Date
CN117132479A (en) 2023-11-28

Family

ID=88857096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310488968.2A Pending CN117132479A (en) 2023-04-28 2023-04-28 Moire pattern eliminating method, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN117132479A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017005644A (en) * 2015-06-16 2017-01-05 ハンファテクウィン株式会社Hanwha Techwin Co.,Ltd. Image processing apparatus, image processing method and imaging device
CN110717450A (en) * 2019-10-09 2020-01-21 深圳大学 Training method and detection method for automatically identifying copied image of original document
CN111523459A (en) * 2020-04-22 2020-08-11 中科三清科技有限公司 Remote sensing image bare area identification method and device, electronic equipment and storage medium
CN112598602A (en) * 2021-01-06 2021-04-02 福建帝视信息科技有限公司 Mask-based method for removing Moire of deep learning video
CN113706392A (en) * 2020-05-20 2021-11-26 Tcl科技集团股份有限公司 Moire pattern processing method, computer-readable storage medium and terminal device
WO2023279863A1 (en) * 2021-07-07 2023-01-12 荣耀终端有限公司 Image processing method and apparatus, and electronic device


Similar Documents

Publication Publication Date Title
CN112130742B (en) Full screen display method and device of mobile terminal
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN115866121B (en) Application interface interaction method, electronic device and computer readable storage medium
CN113747085B (en) Method and device for shooting video
CN111669459B (en) Keyboard display method, electronic device and computer readable storage medium
US11949978B2 (en) Image content removal method and related apparatus
CN113364971A (en) Image processing method and device
CN111882642B (en) Texture filling method and device for three-dimensional model
US20230224574A1 (en) Photographing method and apparatus
CN115115679A (en) Image registration method and related equipment
CN113452969B (en) Image processing method and device
CN111880647B (en) Three-dimensional interface control method and terminal
CN114004732A (en) Image editing prompting method and device, electronic equipment and readable storage medium
CN113538227B (en) Image processing method based on semantic segmentation and related equipment
US20230014272A1 (en) Image processing method and apparatus
CN115880347B (en) Image processing method, electronic device, storage medium, and program product
CN113497888B (en) Photo preview method, electronic device and storage medium
CN117132479A (en) Moire pattern eliminating method, electronic device and readable storage medium
CN114283195A (en) Method for generating dynamic image, electronic device and readable storage medium
CN115880350A (en) Image processing method, apparatus, system, and computer-readable storage medium
EP4296840A1 (en) Method and apparatus for scrolling to capture screenshot
CN117036206B (en) Method for determining image jagged degree and related electronic equipment
CN114245011B (en) Image processing method, user interface and electronic equipment
CN116091572B (en) Method for acquiring image depth information, electronic equipment and storage medium
CN117152022A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination