CN117474756A - Image processing method, device, electronic equipment and computer storage medium - Google Patents


Info

Publication number
CN117474756A
CN117474756A (application CN202310151818.2A)
Authority
CN
China
Prior art keywords: amplified, image, processed, acquiring, image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310151818.2A
Other languages
Chinese (zh)
Inventor
雷明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL Yunchuang Technology Co., Ltd.
Original Assignee
Shenzhen TCL Yunchuang Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL Yunchuang Technology Co., Ltd.
Priority to CN202310151818.2A
Publication of CN117474756A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 — Geometric image transformations in the plane of the image
    • G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 — Interaction techniques based on GUIs for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 — Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06V 10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing apparatus, an electronic device, and a computer storage medium. In the embodiment of the application, an image to be processed is acquired, and an object in the image to be processed is magnified to obtain a magnified object corresponding to the object; the object type of the magnified object is determined, and a sharpening model set is acquired, wherein the sharpening model set comprises a plurality of sharpening models; a sharpening model matched with the object type is screened out from the sharpening model set to obtain a target sharpening model; and a sharpening operation is performed on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object. The embodiment of the application can improve the usability of the electronic device.

Description

Image processing method, device, electronic equipment and computer storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a computer storage medium.
Background
With the development of science and technology, the photographing quality of cameras keeps improving, and more and more images are captured with cameras. When users want to view a captured image, they can do so on an electronic device.
While viewing a captured image on an electronic device, the user can zoom in on it. However, the resolution of an image is limited, and at a large magnification factor the magnified image may appear blurred. Many electronic devices therefore cap the magnification factor, so an image cannot be magnified to the factor the user wants, which limits the use of the device.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing apparatus, an electronic device, and a computer storage medium, which can solve the technical problem that limiting the magnification factor restricts the use of the electronic device.
The embodiment of the application provides an image processing method, which comprises the following steps:
acquiring an image to be processed, and magnifying an object in the image to be processed to obtain a magnified object corresponding to the object;
determining an object type of the magnified object, and acquiring a sharpening model set, wherein the sharpening model set comprises a plurality of sharpening models;
screening out a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model;
and performing a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
Accordingly, an embodiment of the present application provides an image processing apparatus, including:
an acquisition module, configured to acquire an image to be processed and magnify an object in the image to be processed to obtain a magnified object corresponding to the object;
a determining module, configured to determine an object type of the magnified object and acquire a sharpening model set, wherein the sharpening model set comprises a plurality of sharpening models;
a screening module, configured to screen out a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model;
and a sharpening module, configured to perform a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
In addition, the embodiment of the application also provides electronic equipment, which comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for running the computer program in the memory to realize the image processing method provided by the embodiment of the application.
In addition, the embodiment of the application further provides a computer readable storage medium, and the computer readable storage medium stores a computer program, and the computer program is suitable for being loaded by a processor to execute any one of the image processing methods provided by the embodiment of the application.
Furthermore, embodiments of the present application provide a computer program product, including a computer program, which when executed by a processor implements any of the image processing methods provided in the embodiments of the present application.
In the embodiment of the application, an image to be processed is acquired, and an object in the image to be processed is magnified to obtain a magnified object corresponding to the object; the object type of the magnified object is determined, and a sharpening model set comprising a plurality of sharpening models is acquired; a sharpening model matched with the object type is screened out from the set to obtain a target sharpening model; and a sharpening operation is performed on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object. Because the sharpened magnified object viewed on the electronic device is clear, the magnification factor of the object need not be limited, and the use of the electronic device is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image to be processed provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a setting interface of preference information provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of a magnifying method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a determination process of a target sharpening model provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a target image provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of another image processing flow provided by an embodiment of the present application;
fig. 8 is a schematic structural view of an image processing apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a computer storage medium. The image processing apparatus may be integrated in an electronic device, which may be a server or a terminal.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.
The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
In addition, "plurality" in the embodiments of the present application means two or more. "first" and "second" and the like in the embodiments of the present application are used for distinguishing descriptions and are not to be construed as implying relative importance.
The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
In the present embodiment, description is made from the viewpoint of an image processing apparatus. For convenience of description, the image processing method of the present application is described in detail below with the image processing apparatus integrated in a terminal, that is, with the terminal as the execution subject.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present application. The image processing method may include:
s101, acquiring an image to be processed, and amplifying an object in the image to be processed to obtain an amplified object corresponding to the object.
The image containing the magnified object may be referred to as a candidate image; there may be at least one magnified object, and the size of the candidate image may be the same as the size of the image to be processed. For example, the user u1 may magnify the image to be processed shown at 201 in fig. 2, and the resulting magnified object and candidate image may be as shown at 202.
The terminal may capture the image to be processed with its own camera. Alternatively, the image to be processed may be captured by the camera of another terminal, which then sends it to the terminal.
The mode of acquiring the image to be processed by the terminal may be selected according to practical situations, and the embodiment of the present application is not limited herein.
At least one object may be included in the image to be processed. An object refers to a component in the image to be processed, which may be at least one of a person, a pet, text, a special effect, or a landscape. For example, an object may be a person, a tree, a flower, or the sky in the image to be processed.
The terminal may respond to a magnifying operation on the image to be processed and take the object located in the area corresponding to the magnifying operation as the subject object; in this case, the terminal does not need to perform object recognition on the image to be processed.
Alternatively, the terminal may recognize the objects in the image to be processed. If the image to be processed includes one object, that object is magnified directly to obtain the corresponding magnified object. If the image to be processed includes a plurality of objects, the terminal may screen out the subject object from those objects and then magnify it to obtain the magnified object corresponding to the subject object. The subject object is the object that the user wants to magnify.
In some embodiments, the process of the terminal to screen the subject object from the objects may be:
acquiring user information of the user magnifying the image to be processed;
determining preference information corresponding to the user information;
and screening out the subject object from the objects of the image to be processed according to the preference information.
The user information of the image to be processed may refer to information of the user who magnifies the image to be processed. The terminal may store preference information corresponding to the user information in advance, so that the preference information can be determined from the user information once the user information is acquired. Optionally, the terminal may display an information setting control, display a setting interface for the preference information in response to a triggering operation on the control, and obtain the preference information corresponding to the user information in response to an editing operation on the setting interface.
For example, the setting interface of the preference information may be as shown in fig. 3; in this case, the editing operation on the setting interface may be a selection operation, and the information corresponding to the selection operation is the preference information corresponding to the user information.
In the embodiment of the application, the object to be magnified in the image to be processed is determined automatically according to the preference information corresponding to the user information, so the user does not need to select it manually, which improves the convenience of magnifying objects. The user may set the magnification factor manually or by voice.
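The preference-based screening described above can be sketched as follows; the function and data names are illustrative stand-ins, not taken from the patent:

```python
def screen_subject(objects, user_id, preferences):
    """Return the object matching the user's highest-priority preference."""
    for preferred_type in preferences.get(user_id, []):
        for obj in objects:
            if obj["type"] == preferred_type:
                return obj
    return objects[0]  # no preference matched: fall back to the first object

objects = [{"type": "landscape", "name": "tree"},
           {"type": "person", "name": "u1"}]
preferences = {"user1": ["person", "pet"]}
print(screen_subject(objects, "user1", preferences)["name"])  # -> u1
```

Because the preferences are scanned in priority order, a user whose profile lists persons before pets gets the person object even when it appears later in the image's object list.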
In other embodiments, the process of the terminal for screening the main object from the objects may also be:
determining a focusing area of the image to be processed;
and taking the object in the focusing area of the image to be processed as the subject object.
The user is usually interested in the object in the focusing area, so the probability that this object is the subject object is high. Therefore, in the embodiment of the application, the object located in the focusing area is taken as the subject object; the object to be magnified in the image to be processed is thus determined automatically, the user does not need to select it manually, and the convenience of magnifying objects is improved.
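A minimal sketch of focus-area screening, assuming each object carries a bounding box in `(x0, y0, x1, y1)` form (the box representation is an assumption, not specified in the patent):

```python
def subject_from_focus(objects, focus_box):
    """Take the object whose bounding-box centre lies in the focus area."""
    fx0, fy0, fx1, fy1 = focus_box
    for obj in objects:
        x0, y0, x1, y1 = obj["box"]
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2   # centre of the object's box
        if fx0 <= cx <= fx1 and fy0 <= cy <= fy1:
            return obj
    return None  # nothing in focus: fall back to another screening strategy

objects = [{"name": "tree", "box": (0, 0, 10, 10)},
           {"name": "person", "box": (40, 40, 60, 60)}]
print(subject_from_focus(objects, (30, 30, 70, 70))["name"])  # -> person
```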
It should be noted that the terminal may set a preset magnification factor, which may be the original maximum magnification factor. When an object in the image to be processed is magnified to the preset factor, the terminal may display a sharpening control and, in response to an enabling operation on that control, allow the magnified object to be magnified further. The information setting control and the sharpening control may be the same control.
For example, as shown in fig. 4, in response to a magnifying operation on the image to be processed, the object located in the area corresponding to the operation is magnified to obtain a magnified object; whether the magnification factor of the magnified object equals the preset factor is then determined, and if so, the sharpening control is displayed. In response to an enabling operation on the sharpening control and a further magnifying operation on the magnified object, the object continues to be magnified.
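The decision flow of fig. 4 can be sketched as a small state function; the returned action names are illustrative assumptions:

```python
def on_magnify(factor, preset_max, control_enabled):
    """Decide what happens when the user magnifies an object."""
    if factor < preset_max:
        return "magnify"               # below the native maximum: just zoom
    if not control_enabled:
        return "show_sharpen_control"  # at the maximum: offer the control
    return "magnify_and_sharpen"       # control on: keep zooming, sharpen later

print(on_magnify(2, 4, False))  # -> magnify
print(on_magnify(4, 4, False))  # -> show_sharpen_control
print(on_magnify(6, 4, True))   # -> magnify_and_sharpen
```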
S102, determining the object type of the magnified object, and acquiring a sharpening model set, wherein the sharpening model set comprises a plurality of sharpening models.
The object type refers to the kind of the object. For example, if the magnified object is the word "happy", its object type is text; if the magnified object is a tree, its object type is landscape.
The sharpening model refers to a neural network model that can improve the definition of an object. The type of the sharpening model may be selected according to practical requirements; for example, it may be a convolutional neural network model or a recurrent neural network model, which is not limited herein.
Different object types can correspond to different sharpening models, which improves the speed of the sharpening operation when the magnified object is sharpened with the target sharpening model.
When training the sharpening model corresponding to an object type, the model may be trained with sample images containing only sample objects of that object type. Alternatively, it may be trained with a first preset number of sample images containing sample objects of that object type and a second preset number of sample images containing sample objects of other object types, where the first preset number is greater than the second preset number.
For example, if the object type is person, the sharpening model corresponding to persons may be trained with sample images containing only persons, or with mostly person images plus a small number of landscape images and a small number of pet images.
Alternatively, the training process of a sharpening model may be:
acquiring a training set and an initial sharpening model corresponding to each initial object type;
magnifying a sample image in the training set to obtain a sample magnified object;
performing a sharpening operation on the sample magnified object through the initial sharpening model to obtain a sharpened sample magnified object;
determining a loss value according to the sharpened sample magnified object and the real label;
if the loss value meets a preset loss condition, taking the initial sharpening model as the sharpening model;
and if the loss value does not meet the preset loss condition, updating the model parameters of the initial sharpening model according to the loss value, and returning to the step of magnifying the sample image in the training set to obtain a sample magnified object.
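The training loop above can be sketched with a toy, single-parameter stand-in for the neural network model; the class and method names, the update rule, and the loss threshold are all illustrative assumptions:

```python
class ToySharpener:
    """Single-parameter stand-in for the initial sharpening model."""
    def __init__(self):
        self.gain = 0.0                      # the only trainable parameter

    def sharpen(self, x):
        return x * (1.0 + self.gain)         # 'sharpening' = scaling here

    def loss(self, prediction, label):
        return abs(prediction - label)       # loss against the real label

    def update(self, loss, lr=0.1):
        self.gain += lr * loss               # crude parameter update

def train(model, samples, threshold=0.05, max_steps=200):
    """Mirror of the steps above: sharpen, score, then stop or update and retry."""
    for _ in range(max_steps):
        total = sum(model.loss(model.sharpen(x), y) for x, y in samples)
        if total <= threshold:               # preset loss condition met
            return model
        model.update(total)                  # update parameters, loop again
    return model

model = train(ToySharpener(), [(1.0, 2.0)])
```

With the single sample `(1.0, 2.0)` the gain converges toward 1.0, so the trained model maps 1.0 to approximately 2.0.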
In some embodiments, determining the object type of the magnified object includes:
acquiring the magnification factor of the magnified object;
and if the magnification factor is larger than the preset magnification factor, determining the object type of the magnified object.
When the magnification factor of the magnified object is greater than the preset factor, the definition of the magnified object is low. Therefore, in the embodiment of the application, the object type is determined in this case, so that the magnified object can subsequently be sharpened with the sharpening model matched to that type.
It should be appreciated that the terminal may decide whether to determine the object type of the magnified object by checking whether the sharpening control is displayed. For example, when the sharpening control is displayed, the terminal may determine the object type of the magnified object.
S103, screening out a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model.
Associations between initial object types and sharpening models may be stored in the sharpening model set. After obtaining the object type, the terminal may match it against the initial object types in the set, and then take the sharpening model associated with the matching initial object type as the target sharpening model.
For example, as shown in fig. 5, the object type of the magnified object is determined. If the object type is person, the sharpening model corresponding to persons is taken as the target sharpening model; if it is text, the model corresponding to text is taken; and if it is landscape, the model corresponding to landscapes is taken.
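The type-to-model matching of fig. 5 amounts to a registry lookup; the registry contents and model names below are placeholders, not actual models from the patent:

```python
# Hypothetical registry mapping initial object types to sharpening models.
SHARPEN_MODELS = {
    "person": "person_sharpener",
    "text": "text_sharpener",
    "landscape": "landscape_sharpener",
}

def target_model(object_type, registry=SHARPEN_MODELS):
    """Return the sharpening model associated with the matched object type."""
    if object_type not in registry:
        raise ValueError(f"no sharpening model for object type {object_type!r}")
    return registry[object_type]

print(target_model("text"))  # -> text_sharpener
```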
In some embodiments, screening out the sharpening model matched with the object type from the sharpening model set to obtain the target sharpening model includes:
screening out the sharpening models matched with the object type from the sharpening model set to obtain a candidate sharpening model set;
acquiring a blur level of the magnified object;
and screening out the target sharpening model from the candidate sharpening model set according to the blur level.
The higher the blur level, the blurrier the magnified object and the stronger the sharpening operation applied to it; the lower the blur level, the less blurry the object and the weaker the sharpening operation. A model for a higher blur level has a more complex structure, runs more slowly, and sharpens more strongly; a model for a lower blur level has a simpler structure, runs faster, and sharpens less.
In the embodiment of the application, a plurality of candidate sharpening models are provided for the same object type; that is, one initial object type corresponds to one candidate sharpening model set containing a plurality of candidate sharpening models matched with that object type. The candidates are then further screened by blur level to obtain the target sharpening model: a magnified object with a higher blur level gets a more complex target model with a stronger sharpening operation, while one with a lower blur level gets a simpler, faster model with a weaker operation. This guarantees the definition of the magnified object while improving the speed of the sharpening operation.
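An illustrative sketch of the blur-level screening: each object type maps to candidate models ordered from light/fast to heavy/slow, and the blur level indexes into that order (the model names and three-level split are assumptions):

```python
CANDIDATES = {"person": ["person_light", "person_medium", "person_heavy"]}

def pick_candidate(object_type, blur_level, candidates=CANDIDATES):
    """Screen the target model from the candidate set by blur level."""
    models = candidates[object_type]
    return models[min(blur_level, len(models) - 1)]  # blurrier -> heavier model

print(pick_candidate("person", 0))  # -> person_light
print(pick_candidate("person", 9))  # -> person_heavy
```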
The blur level of the magnified object can be determined according to its magnification factor or its sharpness. When it is determined according to the magnification factor, acquiring the blur level of the magnified object includes:
acquiring the magnification factor of the magnified object;
determining the multiple difference between the magnification factor and the preset magnification factor;
and determining the blur level of the magnified object according to the multiple difference.
The larger the multiple difference, the higher the blur level of the magnified object; the smaller the difference, the lower the blur level.
In the embodiment of the application, the blur level of the magnified object is determined by computing the multiple difference between the magnification factor and the preset factor and mapping that difference to a level.
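A sketch of mapping the multiple difference to a discrete blur level; the step width of one level per 2x of overshoot is an illustrative choice, not from the patent:

```python
def blur_level_from_factor(factor, preset_factor, step=2.0):
    """Larger overshoot past the preset factor -> higher blur level."""
    diff = max(0.0, factor - preset_factor)  # the multiple difference
    return int(diff // step)                 # 0 at or below the preset factor

print(blur_level_from_factor(4, 4))  # -> 0
print(blur_level_from_factor(9, 4))  # -> 2
```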
When the blur level of the magnified object is determined according to its sharpness, acquiring the blur level includes:
acquiring the sharpness of the magnified object;
and determining the blur level of the magnified object according to the sharpness.
The higher the sharpness, the lower the blur level of the magnified object; the lower the sharpness, the higher the blur level.
In the embodiment of the application, the blur level of the magnified object is determined through its sharpness, so that the candidate sharpening models can subsequently be screened according to that level.
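The inverse sharpness-to-blur-level mapping can be sketched with thresholds; the `[0, 1]` sharpness scale and the threshold values are assumptions for illustration:

```python
def blur_level_from_sharpness(sharpness):
    """Higher sharpness -> lower blur level (thresholds are illustrative)."""
    if sharpness >= 0.8:
        return 0   # already sharp: lightest sharpening
    if sharpness >= 0.5:
        return 1
    return 2       # very blurry: heaviest sharpening

print(blur_level_from_sharpness(0.9))  # -> 0
print(blur_level_from_sharpness(0.3))  # -> 2
```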
S104, performing a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
For example, if the magnified object is as shown at 202 in fig. 2, the target image containing the sharpened magnified object may be as shown in fig. 6.
In the embodiment of the application, the magnified object is sharpened through the target sharpening model so that the target image containing it is clear; the electronic device therefore does not need to limit the magnification factor of images, which improves both the user experience and the usability of the device. Moreover, because different object types correspond to different sharpening models, sharpening the magnified object through the target sharpening model improves the speed of the sharpening operation.
Optionally, after obtaining the target image containing the sharpened magnified object, the terminal may replace the candidate image with the target image and display the target image, so that the user sees a clear magnified image.
For example, as shown in fig. 7, after the candidate image is acquired, a bitmap is drawn and the candidate image is stored in the buffer according to the bitmap. Whether the sharpening control is enabled is then determined. If it is not enabled, the candidate image in the cache is displayed on the screen. If it is enabled, the target sharpening model is acquired, the magnified object of the cached candidate image is sharpened according to that model to obtain the target image, the candidate image in the cache is replaced with the target image, and the target image in the cache is displayed.
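The cache-and-replace flow of fig. 7 can be sketched as follows; the cache structure and function names are illustrative:

```python
def display(candidate, cache, control_on, sharpen):
    """Buffer the candidate; if the sharpening control is on, replace it."""
    cache["bitmap"] = candidate               # draw and store the candidate
    if control_on:
        cache["bitmap"] = sharpen(candidate)  # target image replaces candidate
    return cache["bitmap"]                    # what the screen shows

cache = {}
print(display("blurry", cache, True, lambda img: "sharp_" + img))
# -> sharp_blurry
```

Replacing the cached candidate in place means the display path is identical whether or not sharpening ran; only the cache contents differ.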
When the image to be processed contains a plurality of magnified objects, a sharpening operation may be performed on each magnified object according to the sharpening model matched with that object's type, to obtain each sharpened magnified object; the target image is then determined from the sharpened magnified objects.
Or, the terminal may screen a target magnified object from the plurality of magnified objects, and then perform the sharpening operation on the target magnified object according to the sharpening model matched with the object type of the target magnified object, to obtain a target image containing the sharpened target magnified object.
Or, the terminal may screen a target magnified object from the plurality of magnified objects, and then perform the sharpening operation on each magnified object according to the sharpening model matched with the object type of the target magnified object, to obtain a target image containing each sharpened magnified object.
For example, if the magnified objects are a user and a tree, the user is taken as the target magnified object and the object type of the user is person; the user and the tree are then each sharpened according to the target sharpening model corresponding to the person type, to obtain a target image containing the sharpened user and tree.
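The user-and-tree example can be sketched as follows. The `type_of` and `model_for` callables are hypothetical placeholders; the point illustrated is only that the target object's type selects one model, which is then applied to every magnified object.

```python
# Sketch: the target magnified object's type ("person") selects one
# sharpening model, which is then applied to every magnified object.
# The model behavior here is a hypothetical placeholder.

def sharpen_all_with_target_model(objects, target, type_of, model_for):
    model = model_for(type_of(target))       # model matched to target's type
    return [model(obj) for obj in objects]   # sharpen each magnified object

objects = ["user", "tree"]
result = sharpen_all_with_target_model(
    objects,
    target="user",
    type_of=lambda obj: "person" if obj == "user" else "plant",
    model_for=lambda t: (lambda obj: f"{obj}({t}-model)"),
)
```

Note the trade-off this variant embodies: only one model is loaded, so the tree is sharpened by a person-type model; the per-object variant above trades speed for type-matched quality.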
Alternatively, the target magnified object may be screened from the magnified objects according to the preference information in the user information.
As can be seen from the above, in this embodiment of the application, an image to be processed is acquired, and an object in the image to be processed is magnified to obtain a magnified object corresponding to the object; the object type of the magnified object is determined and a sharpening model set is acquired, the sharpening model set including a plurality of sharpening models; a sharpening model matched with the object type is screened from the sharpening model set to obtain a target sharpening model; and a sharpening operation is performed on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object. Because the sharpened magnified object viewed on the electronic device is clear, the magnification factor of the object need not be limited, which improves the utilization of the electronic device.
In order to facilitate better implementation of the image processing method provided by the embodiments of the application, an apparatus based on the image processing method is also provided. The meanings of the terms are the same as in the image processing method described above; for specific implementation details, reference may be made to the description of the method embodiments.
For example, as shown in fig. 8, the image processing apparatus may include:
The acquisition module 801 is configured to acquire an image to be processed and magnify an object in the image to be processed to obtain a magnified object corresponding to the object.
The determining module 802 is configured to determine the object type of the magnified object and acquire a sharpening model set, where the sharpening model set includes a plurality of sharpening models.
The screening module 803 is configured to screen a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model.
The sharpening module 804 is configured to perform a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
Optionally, the screening module 803 is specifically configured to perform:
screening sharpening models matched with the object type from the sharpening model set to obtain a candidate sharpening model set;
acquiring the blur level of the magnified object;
and screening the target sharpening model from the candidate sharpening model set according to the blur level.
Optionally, the screening module 803 is specifically configured to perform:
acquiring the magnification factor of the magnified object;
determining the multiple difference between the magnification factor and a preset magnification factor;
and determining the blur level of the magnified object according to the multiple difference.
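The magnification-based path can be sketched as a simple mapping from the multiple difference to a blur level. The preset factor and the level thresholds below are illustrative assumptions; the disclosure does not fix particular values.

```python
# Sketch: derive a blur level from the magnification factor. The larger the
# multiple difference above the preset factor, the blurrier the magnified
# object is assumed to be. Preset factor and thresholds are illustrative.

def blur_level_from_magnification(magnification, preset=2.0):
    diff = magnification - preset       # multiple difference
    if diff <= 0:
        return 0                        # at or below preset: not blurred
    if diff <= 2:
        return 1                        # mildly blurred
    if diff <= 4:
        return 2                        # moderately blurred
    return 3                            # severely blurred
```

A level computed this way needs no pixel analysis, so it is cheap enough to evaluate on every zoom gesture.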
Optionally, the screening module 803 is specifically configured to perform:
acquiring the sharpness of the magnified object;
and determining the blur level of the magnified object according to the sharpness.
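The sharpness-based path can be sketched with a common heuristic: use the variance of a discrete Laplacian response as the sharpness measure, then bucket it into a blur level. The Laplacian metric and the thresholds are assumptions chosen for illustration; the disclosure does not mandate a particular sharpness measure.

```python
# Sketch: estimate sharpness as the variance of a 4-neighbour Laplacian
# response over a grayscale image, then bucket it into a blur level
# (higher level = blurrier). Metric and thresholds are heuristics.

def laplacian_variance(img):
    """img: 2D list of grayscale values; returns Laplacian-response variance."""
    responses = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def blur_level_from_sharpness(sharpness, thresholds=(100.0, 25.0, 5.0)):
    """Lower sharpness -> higher blur level (0 = sharp, 3 = very blurred)."""
    for level, t in enumerate(thresholds):
        if sharpness >= t:
            return level
    return len(thresholds)
```

A perfectly flat region has zero Laplacian variance (maximally "blurred" under this measure), while strong edges drive the variance up.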
Optionally, the determining module 802 is specifically configured to perform:
acquiring the magnification factor of the magnified object;
and if the magnification factor is greater than a preset magnification factor, determining the object type of the magnified object.
Optionally, the obtaining module 801 is specifically configured to perform:
screening a subject object from the objects of the image to be processed;
and magnifying the subject object to obtain a magnified object corresponding to the subject object.
Optionally, the obtaining module 801 is specifically configured to perform:
acquiring user information associated with magnifying the image to be processed;
determining preference information corresponding to the user information;
and screening the subject object from the objects of the image to be processed according to the preference information.
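The preference-based screening can be sketched as a ranking over detected objects. The tuple representation and the ordered-preference-list scoring rule are hypothetical illustrations, not specified by the disclosure.

```python
# Sketch: screen a subject object by user preference information.
# Objects are (name, object_type) tuples; preferences is an ordered list
# of preferred object types. Both representations are hypothetical.

def screen_subject(objects, preferences):
    """Pick the object whose type ranks highest in the user's preferences."""
    def rank(obj):
        _name, obj_type = obj
        # Types absent from the preference list rank last.
        return (preferences.index(obj_type)
                if obj_type in preferences else len(preferences))
    return min(objects, key=rank)

subject = screen_subject([("tree", "plant"), ("alice", "person")],
                         preferences=["person", "plant"])
```

With preferences ["person", "plant"], the person is screened as the subject object and only it is magnified, which matches the behavior described above.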
In specific implementation, the above modules may be implemented as independent entities, or may be arbitrarily combined and implemented as the same entity or several entities. For the specific implementation and corresponding beneficial effects of each module, reference may be made to the foregoing method embodiments, which are not repeated here.
The embodiment of the application also provides an electronic device, which may be a server or a terminal, as shown in fig. 9, and shows a schematic structural diagram of the electronic device according to the embodiment of the application, specifically:
The electronic device may include a processor 901 having one or more processing cores, a memory 902 of one or more computer-readable storage media, a power supply 903, an input unit 904, and other components. Those skilled in the art will appreciate that the electronic device structure shown in fig. 9 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange components differently. Wherein:
the processor 901 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing computer programs and/or modules stored in the memory 902, and calling data stored in the memory 902. Optionally, processor 901 may include one or more processing cores; preferably, the processor 901 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 901.
The memory 902 may be used to store computer programs and modules, and the processor 901 performs various functional applications and data processing by executing the computer programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, a computer program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 902 may also include a memory controller to provide the processor 901 with access to the memory 902.
The electronic device further comprises a power supply 903 for powering the various components. Preferably, the power supply 903 is logically connected to the processor 901 via a power management system, so that charging, discharging, and power-consumption management functions are performed by the power management system. The power supply 903 may further include any one or more components such as a direct-current or alternating-current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
The electronic device may also include an input unit 904, which input unit 904 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 901 in the electronic device loads executable files corresponding to the processes of one or more computer programs into the memory 902 according to the following instructions, and the processor 901 executes the computer programs stored in the memory 902, so as to implement various functions, for example:
acquiring an image to be processed, and magnifying an object in the image to be processed to obtain a magnified object corresponding to the object;
determining the object type of the magnified object, and acquiring a sharpening model set, where the sharpening model set includes a plurality of sharpening models;
screening a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model;
and performing a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
The specific embodiments and the corresponding beneficial effects of the above operations can be referred to the above detailed description of the image processing method, and are not described herein.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the various methods of the above embodiments may be completed by a computer program, or by a computer program controlling related hardware. The computer program may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the application provides a computer-readable storage medium storing a computer program that can be loaded by a processor to perform the steps of any of the image processing methods provided by the embodiments of the application. For example, the computer program may perform the following steps:
acquiring an image to be processed, and magnifying an object in the image to be processed to obtain a magnified object corresponding to the object;
determining the object type of the magnified object, and acquiring a sharpening model set, where the sharpening model set includes a plurality of sharpening models;
screening a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model;
and performing a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
The specific embodiments and the corresponding beneficial effects of each of the above operations can be found in the foregoing embodiments, and are not described herein again.
The computer-readable storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
Since the computer program stored in the computer-readable storage medium can execute the steps of any of the image processing methods provided in the embodiments of the application, the beneficial effects of any of those image processing methods can be achieved; for details, see the foregoing embodiments, which are not repeated here.
According to one aspect of the application, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image processing method described above.
The image processing method, apparatus, electronic device, and computer storage medium provided by the embodiments of the application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the application. In summary, the content of this description should not be construed as limiting the application.

Claims (10)

1. An image processing method, comprising:
acquiring an image to be processed, and magnifying an object in the image to be processed to obtain a magnified object corresponding to the object;
determining an object type of the magnified object, and acquiring a sharpening model set, wherein the sharpening model set comprises a plurality of sharpening models;
screening a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model;
and performing a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
2. The image processing method according to claim 1, wherein the screening a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model comprises:
screening sharpening models matched with the object type from the sharpening model set to obtain a candidate sharpening model set;
acquiring a blur level of the magnified object;
and screening the target sharpening model from the candidate sharpening model set according to the blur level.
3. The image processing method according to claim 2, wherein the acquiring a blur level of the magnified object comprises:
acquiring a magnification factor of the magnified object;
determining a multiple difference between the magnification factor and a preset magnification factor;
and determining the blur level of the magnified object according to the multiple difference.
4. The image processing method according to claim 2, wherein the acquiring a blur level of the magnified object comprises:
acquiring a sharpness of the magnified object;
and determining the blur level of the magnified object according to the sharpness.
5. The image processing method according to claim 1, wherein the determining an object type of the magnified object comprises:
acquiring a magnification factor of the magnified object;
and if the magnification factor is greater than a preset magnification factor, determining the object type of the magnified object.
6. The image processing method according to any one of claims 1 to 5, wherein the magnifying an object in the image to be processed to obtain a magnified object corresponding to the object comprises:
screening a subject object from the objects of the image to be processed;
and magnifying the subject object to obtain a magnified object corresponding to the subject object.
7. The image processing method according to claim 6, wherein the screening a subject object from the objects of the image to be processed comprises:
acquiring user information associated with magnifying the image to be processed;
determining preference information corresponding to the user information;
and screening the subject object from the objects of the image to be processed according to the preference information.
8. An image processing apparatus, comprising:
an acquisition module, configured to acquire an image to be processed and magnify an object in the image to be processed to obtain a magnified object corresponding to the object;
a determining module, configured to determine an object type of the magnified object and acquire a sharpening model set, wherein the sharpening model set comprises a plurality of sharpening models;
a screening module, configured to screen a sharpening model matched with the object type from the sharpening model set to obtain a target sharpening model;
and a sharpening module, configured to perform a sharpening operation on the magnified object according to the target sharpening model to obtain a target image containing the sharpened magnified object.
9. An electronic device comprising a processor and a memory, the memory storing a computer program, the processor being configured to execute the computer program in the memory to perform the image processing method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program adapted to be loaded by a processor for performing the image processing method of any one of claims 1 to 7.
CN202310151818.2A 2023-02-14 2023-02-14 Image processing method, device, electronic equipment and computer storage medium Pending CN117474756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310151818.2A CN117474756A (en) 2023-02-14 2023-02-14 Image processing method, device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310151818.2A CN117474756A (en) 2023-02-14 2023-02-14 Image processing method, device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN117474756A true CN117474756A (en) 2024-01-30

Family

ID=89626256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310151818.2A Pending CN117474756A (en) 2023-02-14 2023-02-14 Image processing method, device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN117474756A (en)

Similar Documents

Publication Publication Date Title
US9690980B2 (en) Automatic curation of digital images
CN111209970A (en) Video classification method and device, storage medium and server
CN110012210A (en) Photographic method, device, storage medium and electronic equipment
CN112381104A (en) Image identification method and device, computer equipment and storage medium
CN111984803A (en) Multimedia resource processing method and device, computer equipment and storage medium
CN108062405B (en) Picture classification method and device, storage medium and electronic equipment
CN115471439A (en) Method and device for identifying defects of display panel, electronic equipment and storage medium
CN112734661A (en) Image processing method and device
CN114143429B (en) Image shooting method, device, electronic equipment and computer readable storage medium
EP4340374A1 (en) Picture quality adjustment method and apparatus, and device and medium
CN117474756A (en) Image processing method, device, electronic equipment and computer storage medium
CN114205632A (en) Video preview method and device, electronic equipment and computer readable storage medium
CN114219729A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114444451A (en) Remote annotation method and device
CN113794943A (en) Video cover setting method and device, electronic equipment and storage medium
CN114374798B (en) Scene recognition method, device, electronic equipment and computer readable storage medium
CN113742585B (en) Content searching method, device, electronic equipment and computer readable storage medium
CN117437134A (en) Image processing method, device, electronic equipment and computer storage medium
CN117412182A (en) Image processing method, device, electronic equipment and computer storage medium
CN114390345B (en) Video generation method, device, electronic equipment and computer readable storage medium
CN114416937B (en) Man-machine interaction method, device, equipment, storage medium and computer program product
CN113420176B (en) Question searching method, question frame drawing device, question searching equipment and storage medium
CN113489901B (en) Shooting method and device thereof
CN117409219A (en) Image matching method, device, electronic equipment and computer storage medium
CN117440178A (en) Video processing method, video processing device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination