CN113225451A - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN113225451A
Authority
CN
China
Prior art keywords
image
texture
target
definition
texture information
Prior art date
Legal status
Granted
Application number
CN202110469948.1A
Other languages
Chinese (zh)
Other versions
CN113225451B (en)
Inventor
张人众
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110469948.1A
Publication of CN113225451A
Application granted
Publication of CN113225451B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/73: Deblurring; Sharpening

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image processing method and apparatus and an electronic device, and belongs to the field of image processing. The image processing method comprises: acquiring a first image; determining at least one target object from the first image; acquiring target reference texture information corresponding to the target object based on a historical image; and, in the case that a first texture definition is smaller than a second texture definition, processing an image area corresponding to the target object in the first image according to the target reference texture information. The first texture definition is the texture definition (i.e., sharpness) of the image area corresponding to the target object in the first image, and the second texture definition is the texture definition corresponding to the target reference texture information.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
With the popularization of the photographing function of electronic devices, people increasingly record their lives by taking photos with electronic devices such as mobile phones and cameras. However, photos taken by electronic devices that are not dedicated photographic equipment often cannot meet users' requirements for picture quality, so improving the picture quality of photos has become an urgent problem.
In the prior art, an electronic device typically improves the picture quality of a photo with a conventional image algorithm; that is, it improves image definition by means such as multi-frame stacking and filtering, based on image features and statistical information. However, improving picture quality with conventional image algorithms still suffers from a poor improvement effect and low definition.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, and the problems that in the prior art, the image quality of a shot image is poor in improvement effect and low in definition can be solved.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first image;
determining at least one target object from the first image;
acquiring target reference texture information corresponding to a target object based on the historical image;
under the condition that the definition of the first texture is smaller than that of the second texture, processing an image area corresponding to the target object in the first image according to the target reference texture information;
the first texture definition is the texture definition of an image area corresponding to the target object in the first image, and the second texture definition is the texture definition corresponding to the target reference texture information.
In a second aspect, an embodiment of the present application provides an apparatus for image processing, including:
the first acquisition module is used for acquiring a first image;
a first determination module for determining at least one target object from the first image;
the second acquisition module is used for acquiring target reference texture information corresponding to the target object based on the historical image;
the first processing module is used for processing an image area corresponding to the target object in the first image according to the target reference texture information under the condition that the definition of the first texture is smaller than that of the second texture;
the first texture definition is the texture definition of an image area corresponding to the target object in the first image, and the second texture definition is the texture definition corresponding to the target reference texture information.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, the image area corresponding to the target object in the current first image is enhanced according to target reference texture information that was acquired from historical images of that object. The current first image can therefore be completed in high definition based on information, such as object textures, accumulated from the user's previous shots. This improves the picture quality of specific objects in the image in a targeted manner, improves the quality and definition of the whole image, and achieves personalized picture-quality improvement.
Drawings
FIG. 1 is one of the flow diagrams of an image processing method shown in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram of an example image shown in accordance with an example embodiment;
FIG. 3 is a second schematic diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 4 is one of the schematic diagrams of image segmentation shown in accordance with an exemplary embodiment;
FIG. 5 is a second schematic diagram illustrating image segmentation in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram of a first set of images shown in accordance with an exemplary embodiment;
FIG. 7 is a diagram illustrating reference texture information in accordance with an exemplary embodiment;
FIG. 8 is a third schematic diagram illustrating an image processing method in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram illustrating reference images having the same view angle in accordance with an example embodiment;
FIG. 10 is a schematic diagram illustrating reference images having partially identical view angles in accordance with an exemplary embodiment;
FIG. 11 is a block diagram showing a configuration of an image processing apparatus according to an exemplary embodiment;
FIG. 12 is a block diagram illustrating the structure of an electronic device in accordance with an exemplary embodiment;
FIG. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be appreciated that data so termed may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of elements; for example, a first element can be one element or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding objects are in an "or" relationship.
The image processing method and the electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image processing method provided by the application can be applied to scenes for carrying out optimization processing on shot photos. In addition, according to the image processing method provided by the embodiment of the application, the execution subject can be the image processing device, or a control module used for executing the image processing method in the image processing device. In the embodiment of the present application, an image processing method executed by an image processing apparatus is taken as an example, and the image processing method provided in the embodiment of the present application is described.
FIG. 1 is a flow diagram illustrating an image processing method according to an example embodiment.
As shown in fig. 1, the image processing method may include the steps of:
step 110, a first image is acquired.
The first image in the embodiment of the application may be an image currently acquired by a camera arranged in the electronic device, or may be an image to be processed stored in the electronic device. Accordingly, the first image may be acquired by a camera of the electronic device, or may be acquired directly from an image database of the electronic device. The electronic device may be, for example, a mobile phone, a tablet, a camera, or other devices with a photographing function.
Step 120, at least one target object is determined from the first image.
The target object may be an object displayed in the first image, including but not limited to an animal, an inanimate object, a human face, and the like. In a specific example, as shown in fig. 2, three target objects exist in the image: a fish, a fish tank, and a table.
Illustratively, the first image may be recognized by a preset recognition algorithm so that all objects contained in it are taken as target objects; alternatively, saliency detection may be performed on the first image by a preset saliency detection algorithm so that the relatively salient objects identified are taken as target objects.
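The saliency route above is not specified further in the disclosure. As a minimal sketch, a global-contrast heuristic can mark pixels that differ strongly from the image's mean colour as candidate target-object regions; the function names and the threshold below are illustrative assumptions, not part of the patent.

```python
import numpy as np

def contrast_saliency(image):
    """Global-contrast saliency: distance of each pixel from the mean colour.

    `image` is an (H, W, 3) float array. This is a toy stand-in for the
    patent's unnamed "preset saliency detection algorithm".
    """
    mean = image.reshape(-1, image.shape[-1]).mean(axis=0)
    return np.linalg.norm(image - mean, axis=-1)

def salient_regions(image, threshold=0.5):
    """Return a boolean mask of candidate target-object pixels."""
    sal = contrast_saliency(image)
    return sal > threshold * sal.max()

# A dark frame with one bright square: the square is the "target object".
frame = np.zeros((64, 64, 3))
frame[20:40, 20:40] = 1.0
mask = salient_regions(frame)
```

A real implementation would follow the mask with connected-component analysis or a detector to obtain object instances; the sketch only shows the contrast idea.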
Step 130, obtaining target reference texture information corresponding to the target object based on the historical image.
The historical image may be a photograph previously taken and stored in the user's album on the electronic device, and the target reference texture information corresponding to the target object is obtained from such historical photographs. The target reference texture information may describe features such as patterns, markings, and grooves on the surface of the target object, and may consist of one or more two-dimensional figures describing the surface details of the target object. Such a two-dimensional figure is also called a texture map.
In an alternative implementation manner, before step 110, the method provided in the embodiment of the present application may further include:
acquiring a historical image;
acquiring reference texture information corresponding to at least one object from a historical image;
accordingly, the step 130 may include:
and acquiring target reference texture information corresponding to the target object from the reference texture information under the condition that the target object is determined to be included in the at least one object.
Here, one or more objects contained in one or more historical images acquired from the user's album may be obtained by segmenting those historical images, after which reference texture information corresponding to each object is acquired. The reference texture information may describe the patterns, grooves, and other surface features of an object, and may consist of texture maps describing the surface details of the corresponding object.
In one embodiment, from a plurality of historical images similar to that shown in fig. 2, a texture map corresponding to a fish tank, and a texture map corresponding to a table can be extracted.
In this way, the texture features of each object in the scenes the user frequently shoots are obtained by analyzing the historical images. For a first image taken in the same scene, these texture features can then be referenced to improve the texture of the target object in the first image, so the picture quality of the first image is improved in a more targeted way.
In addition, in an optional implementation manner, in a case where it is determined that the target object is not included in the at least one object, the image processing method related to the foregoing may further include:
the first image is stored as a history image.
Here, when the at least one object in the historical images does not include the target object, the target object is a new object that has not appeared before, and the first image is stored as a historical image, for example in the user's album.
In this way, by continuously storing images that contain new objects, the object material library is continuously enriched and improved, so that when the user shoots, the currently captured image can be completed in high definition using the accumulated information such as object textures, improving the picture quality of the shot.
In an alternative implementation manner, after step 110, the method provided in the embodiment of the present application may further include:
acquiring a first shooting position corresponding to the first image;
accordingly, the step of obtaining the target reference texture information corresponding to the target object from the reference texture information when it is determined that the target object is included in the at least one object may specifically include:
determining at least one first object corresponding to the first photographing position;
in a case where it is determined that the target object is included in the at least one first object, target reference texture information corresponding to the target object is acquired from first reference texture information of the at least one first object.
Here, the first shooting position may be the position of the electronic device when the user captures the first image; the device's current position can be recorded as the first shooting position at the moment of capture. The first object may be an object that was once photographed in the shooting scene corresponding to the first shooting position.
For example, on the premise that a database of object reference texture information has been established, when the user takes a picture, the device may first preload the reference texture information for the corresponding location according to the current shooting location, and then narrow the search range for the target reference texture information based on the preloaded data.
In this way, by preloading the reference texture information corresponding to the shooting location, the time required to query the target reference texture information is effectively shortened, and the difficulty of finding the target reference texture information corresponding to the target object is also reduced.
In addition, before the step of acquiring the history image, the method provided by the embodiment of the present application may further include:
a capture prompt for at least one subject is displayed.
Here, the photographing prompt may be used to guide the user to photograph at least one subject.
Therefore, the user can be reminded to shoot the object in each direction, and materials of the object can be supplemented in a targeted manner, so that clearer reference texture information can be obtained.
Step 140, in the case that the first texture definition is smaller than the second texture definition, processing an image area corresponding to the target object in the first image according to the target reference texture information.
The first texture definition is the texture definition of the image area corresponding to the target object in the first image, and the second texture definition is the texture definition corresponding to the target reference texture information. That is, when the texture definition of the image area corresponding to the target object in the first image is lower than the texture definition corresponding to the target reference texture information, that image area may be processed: the target reference texture information may be directly overlaid onto the image area, or the image area may be enhanced by a high-definition enhancement algorithm that uses the target reference texture information, so as to improve the definition of the target object in the first image.
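The patent does not name a metric for "texture definition". A common proxy, assumed here purely for illustration, is the variance of the Laplacian response, which is larger for sharper textures; the comparison in step 140 can then be sketched as:

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response: a standard focus/sharpness proxy.

    The patent does not specify a definition metric; this is one common choice.
    `gray` is a 2-D float array; a valid (no-padding) correlation is used.
    """
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def needs_texture_completion(region, reference_texture):
    """True when the captured region is blurrier than the stored reference."""
    return laplacian_variance(region) < laplacian_variance(reference_texture)

# Sharp checkerboard vs. a box-blurred copy of it (the blur averages the
# alternating pattern to a constant, so its Laplacian variance collapses).
sharp = np.indices((32, 32)).sum(axis=0) % 2.0
blurred = (sharp[:-1, :-1] + sharp[1:, :-1] + sharp[:-1, 1:] + sharp[1:, 1:]) / 4
```

In the flow of step 140, `needs_texture_completion(region, reference)` returning true would trigger the overlay or prior-guided enhancement described above.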
In an optional implementation manner, the processing, according to the reference texture information, an image region corresponding to the target object in the first image in step 140 may specifically include:
and taking the target reference texture information as prior information, and performing enhancement processing on an image area corresponding to the target object in the first image.
Here, the target reference texture information corresponding to the target object may be used as prior knowledge for a high-definition enhancement algorithm applied to the target object, thereby improving the definition of the object.
Because the reference texture information stored in the database and the texture in the currently captured image may differ in external conditions such as illumination, enhancing the first image with the target reference texture information as algorithmic prior knowledge avoids a sense of incongruity, compared with directly overlaying a clearer target reference texture map onto the corresponding position, and the quality-enhancement effect is more natural.
In addition, in an optional implementation manner, in a case where the first texture definition is not less than the second texture definition, the image processing method related to the foregoing may further include:
storing the first image as a history image;
and deleting the target reference texture information.
For example, after the user captures the first image, the currently captured first image may be segmented and recognized to determine whether an object A in it already exists in the current database. If object A in the first image exists in the current database, the texture definition of object A is compared with the definition of the reference texture corresponding to object A in the database. If the texture of object A in the first image is better than the existing reference texture, reference texture information corresponding to the new texture is written into the database and the corresponding old reference texture information is deleted. If the texture of object A in the first image is worse than the existing reference texture, the database is not updated; instead, the reference texture information corresponding to object A is retrieved from the database, and the area corresponding to object A in the first image is optimized.
In this way, by performing quality-improvement processing on the image area corresponding to the target object in the current first image according to target reference texture information acquired from historical images, the current first image can be completed in high definition based on information such as object textures accumulated from the user's previous shots; the picture quality of objects in the image is improved in a targeted manner, the quality and definition of the whole image are improved, and personalized picture-quality improvement is achieved.
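The update-or-enhance flow for object A can be sketched as a small decision function; the database layout, the labels, and the numeric sharpness scores below are illustrative assumptions, not from the disclosure.

```python
def update_or_enhance(database, label, new_texture, new_sharpness):
    """Decide what to do with a newly captured object region.

    `database` maps object label -> (reference_texture, sharpness).
    Returns "store-new" if the object is unseen, "update" if the new
    capture is sharper than the stored reference (the reference is then
    replaced), else "enhance" (the stored reference is kept and used to
    improve the capture).
    """
    if label not in database:
        database[label] = (new_texture, new_sharpness)
        return "store-new"
    ref_texture, ref_sharpness = database[label]
    if new_sharpness > ref_sharpness:
        database[label] = (new_texture, new_sharpness)  # replace old reference
        return "update"
    return "enhance"  # keep database; enhance the capture using ref_texture

db = {}
a1 = update_or_enhance(db, "fish", "tex_v1", 0.4)  # unseen object
a2 = update_or_enhance(db, "fish", "tex_v2", 0.9)  # sharper capture
a3 = update_or_enhance(db, "fish", "tex_v3", 0.2)  # blurrier capture
```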
On the basis of the foregoing embodiment, in one possible embodiment, as shown in fig. 3, when the number of the history images is multiple, the step of obtaining the reference texture information corresponding to at least one object from the history images may specifically include steps 310 to 330, which are specifically as follows:
step 310, performing object segmentation on the plurality of historical images to obtain a plurality of reference images corresponding to at least one object.
Here, one object corresponds to at least one reference image. The reference image may be an image of a corresponding region of the object in the history image. A historical image can be segmented into N reference images according to N objects contained in the historical image, wherein N is a positive integer.
In a specific example, as shown in fig. 4, if there is an image 1 in the history image, a preset semantic segmentation algorithm may be used to segment the object in the image 1, so as to obtain a series of independent reference images corresponding to the object, such as a reference image 11 corresponding to the object "fish", a reference image 12 corresponding to the object "fish tank", and a reference image 13 corresponding to the object "table". As shown in fig. 5, if there is an image 2 in the history image, a preset semantic segmentation algorithm may be used to segment the object in the image 2 to obtain a series of independent reference images corresponding to the object, such as a reference image 21 corresponding to the object "fish", a reference image 22 corresponding to the object "fish tank", and a reference image 23 corresponding to the object "table".
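Assuming a segmentation model has already produced a per-pixel label mask (the "preset semantic segmentation algorithm" itself is not specified in the disclosure), cutting a historical image into per-object reference images reduces to cropping each label's bounding box:

```python
import numpy as np

def crop_reference_images(image, label_mask):
    """Split an image into per-object reference images using a label mask.

    `label_mask` holds an integer object id per pixel (0 = background),
    e.g. the output of a semantic segmentation model (not implemented
    here). Returns {object_id: crop of that object's bounding box}.
    """
    crops = {}
    for obj_id in np.unique(label_mask):
        if obj_id == 0:
            continue
        ys, xs = np.nonzero(label_mask == obj_id)
        crops[obj_id] = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return crops

# Toy scene: object 1 (a "fish") and object 2 (a "table") on background 0.
img = np.arange(100.0).reshape(10, 10)
mask = np.zeros((10, 10), dtype=int)
mask[1:4, 1:5] = 1
mask[6:9, 2:8] = 2
refs = crop_reference_images(img, mask)
```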
In an optional implementation manner, the step 310 may specifically include:
classifying the plurality of historical images according to shooting positions to obtain at least one second image set;
and performing object segmentation on the plurality of historical images based on the second image set to obtain a plurality of reference images corresponding to at least one object.
For example, before the reference images segmented from the historical images are classified by object, the plurality of historical images are first classified by shooting position to obtain at least one second image set; the historical images contained in each second image set are then segmented and otherwise processed on a per-set basis.
As a specific example, suppose 100 historical images exist in the user's album and an electronic device such as a mobile phone reads the album in the background. If the position information read from the 100 historical images shows that 23 of them were shot in the user's office, a classification set is formed from those 23 images; if it shows that 50 of them were shot at home, another classification set is formed from those 50 images; and so on, yielding a plurality of second image sets. In addition, if only one photo was taken at some position, that photo is ignored and need not be classified.
In this way, since the objects captured in scenes at the same or similar shooting places are generally the same, classifying the historical images by geographic position means that, when the currently shot image is processed, only the historical images whose shooting position is the same or similar need to be analyzed, rather than excessive data from other scenes, which saves image-processing time.
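The classification by shooting position can be sketched by grouping photos on rounded GPS coordinates. The rounding radius and the rule of ignoring single-photo locations follow the example above, but the exact notion of "same or similar" place is an assumption, as are the coordinates shown.

```python
def group_by_location(photos, precision=2):
    """Group photos by rounded GPS coordinates, dropping singleton groups.

    `photos` is a list of (name, latitude, longitude). Rounding to two
    decimal places (roughly 1 km) is an illustrative stand-in for "same
    or similar shooting place"; the patent does not define the radius.
    """
    groups = {}
    for name, lat, lon in photos:
        key = (round(lat, precision), round(lon, precision))
        groups.setdefault(key, []).append(name)
    # A location with only one photo is ignored, as in the example above.
    return {k: v for k, v in groups.items() if len(v) > 1}

album = [("p1", 30.2741, 120.1551),  # office
         ("p2", 30.2742, 120.1552),  # office
         ("p3", 30.1856, 120.2110),  # home
         ("p4", 30.1857, 120.2111),  # home
         ("p5", 31.2304, 121.4737)]  # one-off location: ignored
second_image_sets = group_by_location(album)
```

In practice the coordinates would come from the photos' EXIF GPS metadata rather than being supplied directly.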
Step 320, classifying the plurality of reference images according to at least one object to obtain at least one first image set.
Here, the reference images belonging to a first image set correspond to the same object, that is, a first image set corresponds to an object, and the first image set may include a plurality of reference images corresponding to the object.
For example, after the segmentation is completed, the reference images corresponding to the same object may be classified into the same type of reference image according to the features of the object, such as the contour, the color, and the like, so as to obtain at least one first image set.
In a specific example, the six reference images shown in fig. 4 and fig. 5 are classified according to the three objects "fish", "fish tank", and "table" into the three image sets shown in fig. 6: an image set 31 corresponding to "fish", an image set 32 corresponding to "fish tank", and an image set 33 corresponding to "table". By analogy, images shot in the same scene are segmented one by one, the reference images corresponding to the same class of object are stored in the same database, and a plurality of first image sets are obtained. In this way, as album photos accumulate, the database corresponding to each object is continuously enriched.
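The grouping of reference images by features such as "contour, colour, and the like" is not specified further. As one hedged sketch, crops can be assigned to the object whose mean colour they best match, with unmatched crops starting a new class; all prototypes, thresholds, and names here are illustrative.

```python
import numpy as np

def classify_by_colour(crops, prototypes, max_dist=0.3):
    """Assign each reference image to the object whose mean colour it matches.

    A toy stand-in for the patent's feature matching. `prototypes` maps
    object name -> mean RGB; a crop joins the nearest prototype within
    `max_dist`, otherwise it starts a new class of its own.
    """
    classes = {name: [] for name in prototypes}
    for crop_id, crop in crops.items():
        feat = crop.reshape(-1, 3).mean(axis=0)
        name, dist = min(((n, np.linalg.norm(feat - p))
                          for n, p in prototypes.items()),
                         key=lambda t: t[1])
        if dist <= max_dist:
            classes[name].append(crop_id)
        else:
            classes[f"new-{crop_id}"] = [crop_id]
    return classes

protos = {"fish": np.array([1.0, 0.5, 0.0]),    # orange
          "table": np.array([0.4, 0.25, 0.1])}  # brown
crops = {"a": np.full((4, 4, 3), [0.95, 0.5, 0.05]),  # orange-ish
         "b": np.full((4, 4, 3), [0.42, 0.24, 0.1]),  # brown-ish
         "c": np.full((4, 4, 3), [0.0, 0.0, 1.0])}    # blue, unmatched
first_image_sets = classify_by_colour(crops, protos)
```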
Step 330, obtaining reference texture information corresponding to at least one object based on the first image set.
For example, the reference texture information corresponding to the object of the reference image may be obtained from the same type of reference image, for example, when there is a reference image in each view, the texture maps of the view shown in fig. 7 corresponding to the object may be extracted from the reference image. In this way, the reference texture information corresponding to the M objects can be correspondingly obtained based on the M first image sets, where M is a positive integer.
Therefore, after the reference images corresponding to the objects are obtained by segmenting the plurality of historical images, the reference images are classified according to the objects, and then the reference texture information corresponding to the objects can be more conveniently and quickly acquired based on the at least one first image set obtained after classification, so that excessive data do not need to be analyzed when the photos are processed in the follow-up process, and a large amount of time can be saved.
Based on this, in one possible embodiment, as shown in fig. 8, before step 330, the image processing method provided in the embodiment of the present application may further include:
step 810, a first reference image and a second reference image are obtained from the first image set.
Once a first image set contains a large amount of image data, the reference images in it can be pruned and consolidated, and the reference texture information of the object corresponding to the set can then be extracted from the sorted first image set.
Here, any two reference images may be acquired from each first image set. Wherein the first reference picture and the second reference picture may be any two pictures in the first set of pictures.
In step 820, the similarity between the first reference image and the second reference image is determined.
Here, two reference images having almost the same shooting angle but different degrees of sharpness may be first found out from the first image set. Specifically, whether the shooting angles corresponding to the two reference images are the same can be determined by calculating the similarity between the first reference image and the second reference image.
In an optional implementation manner, the step 820 may specifically include:
taking the second reference image as a reference image, and performing feature alignment on the first reference image to obtain a third reference image;
a similarity between the third reference picture and the second reference picture is determined.
For example, the feature points of the first reference image may be aligned to the coordinates of the corresponding feature points of the second reference image, and the other pixels of the first reference image mapped one by one to corresponding coordinates; that is, the first reference image is transformed into a third reference image. The similarity between the third reference image and the second reference image can then be determined by calculating their overlap rate.
In a specific example, if the overlap ratio between the objects in the third reference image and the second reference image reaches 90% after the feature transformation, it may be determined that the similarity between the third reference image and the second reference image reaches 90%.
In this way, calculating the similarity by means of feature alignment allows the two reference images to be compared from the same viewing angle, which makes the comparison simpler and the comparison result more accurate.
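As an illustration of the alignment-then-overlap computation described above, the following sketch represents each reference image as a set of object pixel coordinates and assumes the alignment reduces to a known translation; the mask representation, function name, and translation-only assumption are ours, not the patent's (a real implementation would estimate a full homography from matched feature points):

```python
def align_and_overlap_similarity(mask_a, mask_b, offset):
    """Warp mask_a by a known (dx, dy) offset, then measure overlap with mask_b.

    mask_a, mask_b: sets of (x, y) pixel coordinates covered by the object.
    offset: (dx, dy) translation aligning mask_a's feature points onto mask_b's.
    Returns an overlap ratio in [0, 1] (intersection over union).
    """
    dx, dy = offset
    # "Third reference image": mask_a mapped point by point into mask_b's frame
    warped = {(x + dx, y + dy) for (x, y) in mask_a}
    inter = len(warped & mask_b)
    union = len(warped | mask_b)
    return inter / union if union else 0.0
```

With a perfect alignment the ratio is 1.0; a partial overlap yields a proportionally lower score, which can then be compared against the preset thresholds of step 830.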
Step 830, deleting the image with the lower definition from the first reference image and the second reference image in a case where the similarity is greater than a first preset threshold.
Here, when the similarity is greater than the first preset threshold, the shooting angles of the first reference image and the second reference image are almost the same. At this point, the definitions of the two reference images may be compared, and the one with the lower definition may be deleted.
In a specific example, as shown in fig. 9, the reference image 91 and the reference image 92 have almost the same viewing angle, i.e. the similarity between the two is greater than 90%, but the sharpness of the reference image 92 is significantly better than that of the reference image 91, and at this time, the reference image 91 should be deleted and the reference image 92 should be kept.
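The patent does not specify how definition (sharpness) is measured. One common proxy, assumed here purely for illustration, is the variance of the discrete Laplacian: blurry images have weak edges and score low. A minimal pure-Python sketch:

```python
def laplacian_variance(img):
    """Sharpness proxy: variance of the discrete Laplacian of a grayscale image.

    img: 2D list of grayscale values (at least 3x3). Sharp edges produce
    large Laplacian responses, so sharper images yield a higher variance.
    """
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at (x, y)
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    if not vals:
        return 0.0
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def keep_sharper(img_a, img_b):
    """Return the image with the higher definition; the other would be deleted."""
    return img_a if laplacian_variance(img_a) >= laplacian_variance(img_b) else img_b
```

Applied to the example of fig. 9, the blurrier reference image 91 would score lower and be deleted, while reference image 92 is kept.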
In an optional implementation manner, in a case that the similarity is not greater than the first preset threshold and is greater than the second preset threshold, after determining the similarity between the third reference image and the second reference image, the image processing method provided in the embodiment of the present application may further include:
acquiring a first region in a third reference image and a second region in a second reference image; wherein the first region and the second region can coincide;
deleting a region with lower definition from the first region and the second region;
the first preset threshold is larger than the second preset threshold.
Here, the first preset threshold and the second preset threshold may be set based on experience or need. If the first region in the third reference image can coincide with the second region in the second reference image, the region with the lower definition is deleted from the first region and the second region, while the regions that cannot coincide are retained.
In a specific example, as shown in fig. 10, the image obtained by transforming the reference image 101 partially overlaps the reference image 102, and the reference image 102 is clearer, so the reference image 102 is retained as the texture map of the fish body. However, the reference image 101 also contains a portion (e.g., the fish-mouth portion) that is not covered by the reference image 102. In the absence of another, clearer reference image of the fish-mouth portion, the fish-mouth portion of the reference image 101 may be used as the texture map for that portion. By analogy, a more comprehensive texture map of the corresponding object, i.e., the reference texture information, can be obtained from the first image set.
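The region-level merge described in this example can be sketched as follows; the dict-of-pixels representation and all names are illustrative assumptions, with both regions presumed already brought into a common coordinate frame by feature alignment:

```python
def merge_texture_maps(region_a, region_b, sharpness_a, sharpness_b):
    """Merge two partially overlapping texture regions into one texture map.

    region_a, region_b: dicts mapping (x, y) -> pixel value, in a common
    coordinate frame. sharpness_*: per-image definition scores (e.g. a
    Laplacian variance). Where the regions coincide, the pixel from the
    sharper image wins (the lower-definition region is effectively
    deleted); pixels covered by only one image are kept, like the
    fish-mouth part in the example.
    """
    merged = dict(region_b if sharpness_b >= sharpness_a else region_a)
    fallback = region_a if sharpness_b >= sharpness_a else region_b
    for coord, pixel in fallback.items():
        merged.setdefault(coord, pixel)  # keep parts the sharper image lacks
    return merged
```

Repeating this merge over all reference images in a first image set yields the progressively more complete texture map that serves as the reference texture information.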
In addition, it should be noted that when the user takes a new image, that is, when a new photo of the same object is added to the user's album, the reference texture information (for example, the texture map) corresponding to the object may be further supplemented based on the new image.
In this way, by continuously pruning and consolidating the reference images in the first image set corresponding to each object, the reference texture information for that object in the database can be continuously optimized, so that the image-quality enhancement effect is continuously improved.
It should be noted that the application scenarios described in the embodiment of the present disclosure are for more clearly illustrating the technical solutions of the embodiment of the present disclosure, and do not constitute a limitation on the technical solutions provided in the embodiment of the present disclosure, and as a new application scenario appears, a person skilled in the art may know that the technical solutions provided in the embodiment of the present disclosure are also applicable to similar technical problems.
Based on the same inventive concept, the present application also provides an image processing apparatus. The image processing apparatus provided in this embodiment is described in detail below with reference to fig. 11.
Fig. 11 is a block diagram illustrating a configuration of an image processing apparatus according to an exemplary embodiment.
As shown in fig. 11, the image processing apparatus may include:
a first obtaining module 1110, configured to obtain a first image;
a first determining module 1120 for determining at least one target object from the first image;
a second obtaining module 1130, configured to obtain target reference texture information corresponding to the target object based on the historical image;
a first processing module 1140, configured to process an image area corresponding to the target object in the first image according to the target reference texture information when the first texture definition is smaller than the second texture definition;
the first texture definition is the texture definition of an image area corresponding to the target object in the first image, and the second texture definition is the texture definition corresponding to the target reference texture information.
The following describes the image processing apparatus in detail, specifically as follows:
in one embodiment, the apparatus further comprises:
the third acquisition module is used for acquiring the historical image before acquiring the first image;
the fourth acquisition module is used for acquiring reference texture information corresponding to at least one object from the historical image;
accordingly, the second obtaining module 1130 includes:
and the first obtaining sub-module is used for obtaining target reference texture information corresponding to the target object from the reference texture information under the condition that the target object is determined to be included in at least one object.
In one embodiment, in the case that it is determined that the target object is not included in the at least one object, or the first texture definition is not less than the second texture definition, the apparatus further includes:
the storage module is used for storing the first image as a historical image;
and the deleting module is used for deleting the target reference texture information under the condition that the definition of the first texture is not less than that of the second texture.
In one embodiment, the fourth obtaining module may specifically include:
the segmentation submodule is used for carrying out object segmentation on the plurality of historical images to obtain a plurality of reference images corresponding to at least one object; wherein, one object corresponds to at least one reference image;
the classification submodule is used for classifying the plurality of reference images according to at least one object to obtain at least one first image set; wherein, the reference images belonging to a first image set correspond to the same object.
And the second obtaining sub-module is used for obtaining the reference texture information corresponding to at least one object based on the first image set.
In one embodiment, the partitioning sub-module may specifically include:
the classification unit is used for classifying the plurality of historical images according to shooting positions to obtain at least one second image set;
and the segmentation unit is used for carrying out object segmentation on the plurality of historical images based on the second image set to obtain a plurality of reference images corresponding to at least one object.
In one embodiment, the fourth obtaining module may further include:
the third obtaining sub-module is used for obtaining a first reference image and a second reference image from the first image set before obtaining the reference texture information corresponding to at least one object based on the first image set;
the similarity determining submodule is used for determining the similarity between the first reference image and the second reference image;
and the first deleting submodule is used for deleting the image with lower definition from the first reference image and the second reference image under the condition that the similarity is greater than a first preset threshold value.
In one embodiment, the similarity determination sub-module may specifically include:
the feature alignment unit is used for performing feature alignment on the first reference image by taking the second reference image as a reference image to obtain a third reference image;
a first determining unit for determining a similarity between the third reference image and the second reference image.
In one embodiment, in a case that the similarity is not greater than the first preset threshold and is greater than the second preset threshold, the fourth obtaining module may further include:
the fourth obtaining submodule is used for obtaining a first area in the third reference image and a second area in the second reference image after the similarity between the third reference image and the second reference image is determined; wherein the first region and the second region can coincide;
a second deletion submodule for deleting a region with lower definition from the first region and the second region;
the first preset threshold is larger than the second preset threshold.
In one embodiment, the first processing module 1140 may specifically include:
and the enhancement submodule is used for enhancing the image area corresponding to the target object in the first image by taking the target reference texture information as prior information.
In one embodiment, the apparatus may further include:
and the fifth acquisition module is used for acquiring a first shooting position corresponding to the first image after the first image is acquired.
Accordingly, the first obtaining sub-module may include:
a loading unit for determining at least one first object corresponding to a first photographing position;
an obtaining unit, configured to, in a case where it is determined that the target object is included in the at least one first object, obtain target reference texture information corresponding to the target object from first reference texture information of the at least one first object.
In one embodiment, the apparatus may further include:
the display module is used for displaying a shooting prompt aiming at least one object before the historical image is acquired, and the shooting prompt is used for guiding a user to shoot aiming at the at least one object.
In this way, the image region corresponding to the target object in the current first image is enhanced using the target reference texture information acquired for that object from the historical images. The current first image can thus be completed in high definition based on information, such as object textures, accumulated from the user's previous shots, so that the image quality of the object is improved in a targeted manner, the quality and definition of the whole image are improved, and personalized image-quality enhancement is achieved.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 2 to fig. 10, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 12, an electronic device 1200 is further provided in an embodiment of the present application, and includes a processor 1201, a memory 1202, and a program or an instruction stored in the memory 1202 and executable on the processor 1201, where the program or the instruction is executed by the processor 1201 to implement each process of the image processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, and the like.
Those skilled in the art will appreciate that the electronic device 1300 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1310 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently, which is not repeated here.
The input unit 1304 is used for acquiring a first image;
a processor 1310 that determines at least one target object from the first image; acquiring target reference texture information corresponding to the target object based on the historical image; and under the condition that the definition of the first texture is smaller than that of the second texture, processing an image area corresponding to the target object in the first image according to the target reference texture information.
In this way, the image region corresponding to the target object in the current first image is enhanced using the target reference texture information acquired for that object from the historical images. The current first image can thus be completed in high definition based on information, such as object textures, accumulated from the user's previous shots, so that the image quality of the object is improved in a targeted manner, the quality and definition of the whole image are improved, and personalized image-quality enhancement is achieved.
Optionally, the processor 1310 is specifically configured to acquire a history image; acquiring reference texture information corresponding to at least one object from the historical image; also, in a case where it is determined that the target object is included in the at least one object, target reference texture information corresponding to the target object is acquired from the reference texture information.
Optionally, the memory 1309 is specifically configured to store the first image as a history image; and deleting the target reference texture information under the condition that the definition of the first texture is not less than the definition of the second texture.
Optionally, the processor 1310 is specifically configured to perform object segmentation on the plurality of history images to obtain a plurality of reference images corresponding to at least one object; classifying the plurality of reference images according to at least one object to obtain at least one first image set; and acquiring reference texture information corresponding to at least one object based on the first image set.
Optionally, the processor 1310 is specifically configured to classify the plurality of history images according to shooting positions to obtain at least one second image set; and performing object segmentation on the plurality of historical images based on the second image set to obtain a plurality of reference images corresponding to at least one object.
Optionally, the processor 1310 is specifically further configured to obtain a first reference image and a second reference image from the first image set; determining a similarity between the first reference image and the second reference image; and deleting the image with lower definition from the first reference image and the second reference image under the condition that the similarity is greater than a first preset threshold value.
Optionally, the processor 1310 is specifically configured to use the second reference image as a reference image, and perform feature alignment on the first reference image to obtain a third reference image; and determining a similarity between the third reference image and the second reference image.
Optionally, the processor 1310 is specifically further configured to obtain a first region in the third reference image and a second region in the second reference image; the region with lower definition is deleted from the first region and the second region.
Optionally, the processor 1310 is specifically further configured to perform enhancement processing on an image region corresponding to the target object in the first image, using the target reference texture information as the prior information.
Optionally, the processor 1310 is specifically further configured to acquire a first shooting position corresponding to the first image; the method includes determining at least one first object corresponding to a first photographing position, and, in a case where it is determined that a target object is included in the at least one first object, acquiring target reference texture information corresponding to the target object from first reference texture information of the at least one first object.
Optionally, the display unit 1306 is specifically configured to display a shooting prompt for at least one object, where the shooting prompt is used to guide a user to shoot for the at least one object.
Therefore, the historical images are first classified by shooting position to obtain at least one second image set; the historical images in each second image set are then segmented and classified again to obtain the first image sets, from which the reference texture information corresponding to each object can be obtained conveniently and quickly. Subsequent photo processing therefore does not need to analyze excessive data, which saves a large amount of time.
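The two-stage grouping summarized above (first by shooting position into second image sets, then by object into first image sets) can be sketched as follows; the coarse grid bucketing of positions and all field names are our assumptions:

```python
def build_first_image_sets(photos, position_radius=100.0):
    """Two-stage grouping: by shooting position, then by segmented object.

    photos: list of dicts like {"position": (x, y), "objects": {...}},
    where "objects" maps an object label to its segmented reference image
    (the segmentation itself is assumed already done).
    Stage 1 buckets photos into second image sets by proximity of shooting
    position; stage 2 regroups the segmented reference images by object
    label, yielding the first image sets.
    """
    # Stage 1: second image sets, keyed by a coarse position grid cell
    second_sets = {}
    for photo in photos:
        x, y = photo["position"]
        cell = (int(x // position_radius), int(y // position_radius))
        second_sets.setdefault(cell, []).append(photo)

    # Stage 2: within each second set, regroup reference images by object,
    # so that every first image set corresponds to exactly one object
    first_sets = {}
    for cell, group in second_sets.items():
        for photo in group:
            for label, ref_image in photo["objects"].items():
                first_sets.setdefault(label, []).append(ref_image)
    return second_sets, first_sets
```

Restricting stage 2 to photos within one position bucket is what keeps later lookups cheap: only objects plausible at the current shooting position need to be loaded.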
It should be understood that in the embodiment of the present application, the input unit 1304 may include a Graphics Processing Unit (GPU) 13041 and a microphone 13042, and the graphics processor 13041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1307 includes a touch panel 13071 and other input devices 13072. The touch panel 13071 is also referred to as a touch screen. The touch panel 13071 may include two parts, a touch detection device and a touch controller. Other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 1309 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. The processor 1310 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1310.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image processing method, comprising:
acquiring a first image;
determining at least one target object from the first image;
acquiring target reference texture information corresponding to the target object based on the historical image;
under the condition that the definition of a first texture is smaller than that of a second texture, processing an image area corresponding to the target object in the first image according to the target reference texture information;
the first texture definition is texture definition of an image area corresponding to the target object in the first image, and the second texture definition is texture definition corresponding to the target reference texture information.
2. The method of claim 1, wherein prior to acquiring the first image, the method further comprises:
acquiring a historical image;
acquiring reference texture information corresponding to at least one object from the historical image;
the obtaining of the target reference texture information corresponding to the target object based on the historical image includes:
and under the condition that the target object is determined to be included in the at least one object, acquiring target reference texture information corresponding to the target object from the reference texture information.
3. The method of claim 2, wherein in the event that it is determined that the target object is not included in the at least one object or that the first texture definition is not less than the second texture definition, the method further comprises:
storing the first image as a history image;
deleting the target reference texture information if the first texture definition is not less than the second texture definition.
4. The method according to claim 2, wherein in a case that the number of the history images is multiple, the obtaining reference texture information corresponding to at least one object from the history images comprises:
carrying out object segmentation on the plurality of historical images to obtain a plurality of reference images corresponding to at least one object; wherein one of the objects corresponds to at least one of the reference images;
classifying the plurality of reference images according to the at least one object to obtain at least one first image set; wherein, the objects corresponding to the reference images belonging to one first image set are the same;
and acquiring reference texture information corresponding to the at least one object based on the first image set.
5. The method of claim 4, wherein the object segmenting the plurality of historical images into a plurality of reference images corresponding to at least one object comprises:
classifying the plurality of historical images according to shooting positions to obtain at least one second image set;
and performing object segmentation on the plurality of historical images based on the second image set to obtain a plurality of reference images corresponding to at least one object.
6. The method according to claim 4, wherein before obtaining the reference texture information corresponding to the at least one object based on the first set of images, the method further comprises:
acquiring a first reference image and a second reference image from the first image set;
determining a similarity between the first reference image and the second reference image;
and deleting the image with lower definition from the first reference image and the second reference image under the condition that the similarity is greater than a first preset threshold value.
7. The method of claim 6, wherein the determining the similarity between the first reference picture and the second reference picture comprises:
taking the second reference image as a reference image, and performing feature alignment on the first reference image to obtain a third reference image;
determining a similarity between the third reference picture and the second reference picture.
8. The method according to claim 7, wherein in a case where the similarity is not greater than the first preset threshold and greater than a second preset threshold, after determining the similarity between the third reference image and the second reference image, the method further comprises:
acquiring a first region in the third reference image and a second region in the second reference image; wherein the first region and the second region are capable of coinciding;
deleting a region with lower definition from the first region and the second region;
wherein the first preset threshold is greater than the second preset threshold.
9. The method according to claim 1, wherein the processing an image region corresponding to the target object in the first image according to the target reference texture information comprises:
and taking the target reference texture information as prior information, and performing enhancement processing on an image area corresponding to the target object in the first image.
10. The method of claim 2, wherein after acquiring the first image, the method further comprises:
acquiring a first shooting position corresponding to the first image;
the obtaining, from the reference texture information, target reference texture information corresponding to the target object when it is determined that the target object is included in the at least one object, includes:
determining at least one first object corresponding to the first photographing position;
and under the condition that the target object is determined to be included in the at least one first object, acquiring target reference texture information corresponding to the target object from the first reference texture information of the at least one first object.
11. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring a first image;
a first determination module for determining at least one target object from the first image;
a second obtaining module, configured to obtain, based on a historical image, target reference texture information corresponding to the target object;
the first processing module is used for processing an image area corresponding to the target object in the first image according to the target reference texture information under the condition that the definition of the first texture is smaller than that of the second texture;
the first texture definition is texture definition of an image area corresponding to the target object in the first image, and the second texture definition is texture definition corresponding to the target reference texture information.
12. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 10.
CN202110469948.1A 2021-04-28 2021-04-28 Image processing method and device and electronic equipment Active CN113225451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110469948.1A CN113225451B (en) 2021-04-28 2021-04-28 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113225451A 2021-08-06
CN113225451B 2023-06-27

Family

ID=77089796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110469948.1A Active CN113225451B (en) 2021-04-28 2021-04-28 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113225451B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050010602A1 (en) * 2000-08-18 2005-01-13 Loui Alexander C. System and method for acquisition of related graphical material in a digital graphics album
JP2007199849A (en) * 2006-01-24 2007-08-09 Canon Inc Image processor and processing method, computer program, computer-readable recording medium, and image formation system
CN106777007A (en) * 2016-12-07 2017-05-31 北京奇虎科技有限公司 Photograph album Classified optimization method, device and mobile terminal
CN110109878A (en) * 2018-01-10 2019-08-09 广东欧珀移动通信有限公司 Photograph album management method, device, storage medium and electronic equipment
CN110175254A (en) * 2018-09-30 2019-08-27 广东小天才科技有限公司 A kind of the classification storage method and wearable device of photo
CN110910330A (en) * 2019-11-29 2020-03-24 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
WO2020073505A1 (en) * 2018-10-11 2020-04-16 平安科技(深圳)有限公司 Image processing method, apparatus and device based on image recognition, and storage medium
CN111031241A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image processing method and device, terminal and computer readable storage medium
CN111062904A (en) * 2019-12-09 2020-04-24 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111105368A (en) * 2019-12-09 2020-05-05 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111488477A (en) * 2019-01-25 2020-08-04 中国科学院半导体研究所 Album processing method, apparatus, electronic device and storage medium
CN111523346A (en) * 2019-02-01 2020-08-11 深圳市商汤科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN111553864A (en) * 2020-04-30 2020-08-18 深圳市商汤科技有限公司 Image restoration method and device, electronic equipment and storage medium
CN111695512A (en) * 2020-06-12 2020-09-22 嘉应学院 Unattended cultural relic monitoring method and device
CN111698553A (en) * 2020-05-29 2020-09-22 维沃移动通信有限公司 Video processing method and device, electronic equipment and readable storage medium
CN111726533A (en) * 2020-06-30 2020-09-29 RealMe重庆移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
US20200380660A1 (en) * 2018-07-12 2020-12-03 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, computer-readable medium, and electronic device
CN112508820A (en) * 2020-12-18 2021-03-16 维沃移动通信有限公司 Image processing method and device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125290A (en) * 2021-11-22 2022-03-01 维沃移动通信有限公司 Shooting method and device
CN115131698A (en) * 2022-05-25 2022-09-30 腾讯科技(深圳)有限公司 Video attribute determination method, device, equipment and storage medium
CN115131698B (en) * 2022-05-25 2024-04-12 腾讯科技(深圳)有限公司 Video attribute determining method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113225451B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN106570110B (en) Image duplicate removal method and device
EP3125135B1 (en) Picture processing method and device
US10007841B2 (en) Human face recognition method, apparatus and terminal
US20200082851A1 (en) Bounding box doubling as redaction boundary
CN110889379B (en) Expression package generation method and device and terminal equipment
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN111241872B (en) Video image shielding method and device
CN113225451B (en) Image processing method and device and electronic equipment
CN112135046A (en) Video shooting method, video shooting device and electronic equipment
CN112492201B (en) Photographing method and device and electronic equipment
CN111669495B (en) Photographing method, photographing device and electronic equipment
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN111787230A (en) Image display method and device and electronic equipment
CN112822394B (en) Display control method, display control device, electronic equipment and readable storage medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
US9286707B1 (en) Removing transient objects to synthesize an unobstructed image
CN112887615A (en) Shooting method and device
CN112330728A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113271378B (en) Image processing method and device and electronic equipment
CN113271379B (en) Image processing method and device and electronic equipment
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN113012085A (en) Image processing method and device
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN113056905A (en) System and method for taking tele-like images
US11776237B2 (en) Mitigating people distractors in images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant