CN114693511A - Picture completion method and electronic equipment - Google Patents

Picture completion method and electronic equipment

Info

Publication number
CN114693511A
CN114693511A, CN202110236949.1A, CN202110236949A
Authority
CN
China
Prior art keywords
picture
electronic device
completion
resource
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110236949.1A
Other languages
Chinese (zh)
Inventor
卞超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN114693511A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/5838 Retrieval characterised by using metadata automatically derived from the content, using colour
    • G06F16/5854 Retrieval characterised by using metadata automatically derived from the content, using shape and object relationship
    • G06F16/5862 Retrieval characterised by using metadata automatically derived from the content, using texture
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a picture completion method and an electronic device. The method includes: a first electronic device obtains a first picture; the first electronic device determines a first region where a first designated target is located in the first picture; the first electronic device crops the image in the first region; and the first electronic device replaces the image in the first region with a first completion resource and pastes the first completion resource back into the first region to obtain a second picture, where the first completion resource is a picture other than the first picture. In this way, the electronic device can beautify the picture obtained by the first electronic device according to real picture resources, improving the visual effect of that picture.

Description

Picture completion method and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a picture completion method and an electronic device.
Background
As the performance of electronic devices improves, the capabilities that cameras can provide grow stronger, and users' expectations for camera results rise accordingly.
During normal shooting, a captured picture may be blurred, or the result may be unsatisfactory, for various reasons such as natural shake of the photographer, or capturing the shot before the subject's motion is complete or before focusing finishes. The user then has to shoot again, which degrades the shooting experience.
At present, after shooting is completed, image processing such as exposure adjustment and noise-point denoising can be applied to the picture captured by the camera to improve its definition, or image processing algorithms can apply effects such as "big eyes" and "slim legs". However, when the picture's definition is low, over-processing can distort the picture, and a result that satisfies the user cannot be achieved.
Disclosure of Invention
The present application provides a picture completion method and an electronic device, which enable the electronic device to perform completion processing on a picture according to real high-definition picture resources, improving the visual effect of the picture.
In a first aspect, the present application provides a picture completion method, including: a first electronic device obtains a first picture; the first electronic device determines a first region where a first designated target is located in the first picture; and the first electronic device modifies the image in the first region according to a first completion resource to obtain a second picture, where the first completion resource is a picture other than the first picture. In this way, the electronic device can beautify the picture obtained by the first electronic device according to real picture resources, improving its visual effect.
The first picture may be a picture captured in real time by a camera of the first electronic device, a picture in a gallery, a picture in file management, a picture in a server, a picture sent to the first electronic device by a second electronic device, or the like; the first picture may also be an image frame in a video.
In some embodiments, the first electronic device may use the regions where all designated targets identified in the first picture are located as the completion region (first region); in other embodiments, the first electronic device may use the region where the designated target with the highest category priority among all identified designated targets is located as the completion region (first region); in still other embodiments, the first electronic device may receive a user's selection operation on the first picture to determine the completion region (first region), where the selection operation may be a click operation or a slide operation.
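The priority-based alternative above can be sketched in a few lines; the category names, priority values, and bounding-box tuples below are illustrative assumptions, not the claimed implementation:

```python
# Hypothetical sketch: pick the completion region (first region) as the
# region of the designated target with the highest category priority.
# Categories and priorities are assumptions for illustration.
CATEGORY_PRIORITY = {"face": 3, "person": 2, "building": 1}

def pick_completion_region(detections):
    """detections: list of (category, bounding_box) pairs identified in
    the first picture. Returns the box of the highest-priority target,
    or None when nothing was detected."""
    if not detections:
        return None
    category, box = max(detections,
                        key=lambda d: CATEGORY_PRIORITY.get(d[0], 0))
    return box
```

With detections `[("building", (0, 0, 100, 100)), ("face", (40, 10, 80, 50))]`, the face region would be chosen as the completion region.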
The first electronic device may modify the image in the first region according to the first completion resource in any one of the following manners:
Manner 1: the first electronic device crops the image in the first region of the first picture, replaces it with the first completion resource, and then places the first completion resource in the first region to obtain the second picture. The center point of the first completion resource in the second picture coincides with the center point of the image that occupied the first region before cropping.
Manner 2: without cropping the image in the first region, the first electronic device directly overlays the first completion resource on the image in the first region to obtain the second picture, where the center point of the first completion resource coincides with the center point of the image in the first region.
Manner 3: without cropping the image in the first region, the first electronic device fuses the features of the first completion resource with the image features in the first region to obtain the second picture.
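Manner 1 (crop, replace, paste back with coinciding center points) can be sketched with NumPy arrays standing in for pictures; the array shapes, and the assumption that the completion resource is already scaled to the region size, are illustrative rather than taken from the claims:

```python
import numpy as np

def paste_back(picture, region, resource):
    """Replace the pixels of `region` (top, left, bottom, right) in
    `picture` with `resource`, which is assumed to be scaled to the
    region size so that the two center points coincide."""
    top, left, bottom, right = region
    second = picture.copy()          # keep the first picture intact
    second[top:bottom, left:right] = resource
    return second

first_picture = np.zeros((4, 4, 3), dtype=np.uint8)
completion_resource = np.full((2, 2, 3), 255, dtype=np.uint8)
second_picture = paste_back(first_picture, (1, 1, 3, 3), completion_resource)
```

A real implementation would additionally blend the pasted edges so the seam between resource and background is not visible.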
The method may also be used to beautify video captured by the first electronic device. A video is composed of successive frames of pictures, so beautifying a video actually means beautifying each frame that constitutes it; the method by which the first electronic device beautifies each frame of a video is similar to the method by which it beautifies the first picture, as provided by the present application.
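Since a video is just a sequence of frames, the per-picture completion extends to video as a loop over frames; `complete_picture` below is a stand-in (an assumption) for the single-picture method described above:

```python
def complete_picture(frame):
    # Stand-in for the single-picture completion method; here it only
    # tags the frame so that the per-frame loop structure is visible.
    return f"completed({frame})"

def beautify_video(frames):
    # Beautifying a video means beautifying each constituent frame.
    return [complete_picture(frame) for frame in frames]
```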
With reference to the first aspect, in a possible implementation manner, before the first electronic device determines the first region where the first designated target is located in the first picture, the method further includes: the first electronic device receives a first operation of a user on the first region; and determining the first region specifically includes: in response to the first operation, the first electronic device determines the first region of the first designated target in the first picture. The first operation may be a click operation, a slide operation, or the like. In this way, the first electronic device can determine the region the user wants to complete according to the user's first operation, improving the user experience.
In some embodiments, the first electronic device also identifies a second region where a second designated target is located in the first picture. Modifying the image in the first region according to the first completion resource to obtain the second picture then specifically includes: the first electronic device determines that the category priority of the first designated target is higher than that of the second designated target, and modifies the image in the first region according to the first completion resource to obtain the second picture. In this way, the first electronic device can recognize multiple completion regions in the first picture but completes only the region (for example, the first region) where the designated target with the highest category priority is located, which improves the diversity of picture completion.
With reference to the first aspect, in a possible implementation manner, before the first electronic device modifies the image in the first region according to the first completion resource, the method further includes: the first electronic device displays a first user interface that includes the first completion resource; the first electronic device receives a second operation on the first completion resource; and modifying the image specifically includes: in response to the second operation, the first electronic device modifies the image in the first region according to the first completion resource. The category of the first completion resource may be the same as or different from that of the designated target in the first region. In this way, the electronic device can select a first completion resource that interests the user according to the user's second operation, improving the user experience.
With reference to the first aspect, in a possible implementation manner, after the first electronic device determines the first region where the first designated target is located in the first picture, the method further includes: the first electronic device determines, from a resource library according to the features of the first designated target, a first completion resource whose feature similarity with the first designated target is greater than a preset value, where the resource library is any one of the following: pictures stored locally on the first electronic device, pictures stored locally on a second electronic device, or pictures in a server.
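The similarity matching against the resource library can be sketched as follows; cosine similarity over small feature vectors is an illustrative assumption standing in for whatever features the device actually extracts:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_completion_resource(target_features, library, preset_value=0.9):
    """library: list of (picture_id, feature_vector). Returns the picture
    whose feature similarity with the designated target is greatest and
    above the preset value, or None if none qualifies."""
    best_id, best_sim = None, preset_value
    for picture_id, features in library:
        sim = cosine_similarity(target_features, features)
        if sim > best_sim:
            best_id, best_sim = picture_id, sim
    return best_id
```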
In some embodiments, the first electronic device obtains a first date of a third picture in the resource library, where the first date is the date on which the third picture was saved to the library; when the difference between the first date and a specified date is greater than a preset value, the first electronic device removes the third picture from the library. In this way, the first electronic device can remove pictures whose saving time exceeds the specified time, keeping the library updated so that its pictures stay closer to the user's current behavior characteristics.
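The library-update step, dropping pictures whose save date is too far from a specified date, might look like the following; the 30-day window is an illustrative assumption:

```python
from datetime import date

def prune_resource_library(library, specified_date, preset_days=30):
    """library: list of (picture_id, first_date) where first_date is the
    date the picture was saved. Pictures whose date differs from the
    specified date by more than the preset value are removed."""
    return [
        (picture_id, saved_on)
        for picture_id, saved_on in library
        if abs((specified_date - saved_on).days) <= preset_days
    ]
```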
In some embodiments, the first electronic device also needs to determine the completion resource. Specifically, the first electronic device obtains image parameters of multiple pictures and determines the completion resource from them based on those parameters, where the image parameters include one or more of an exposure value, sharpness, color value, image quality value, noise value, anti-shake value, flash value, and artifact value.
In an optional implementation manner, the first electronic device scores the multiple pictures according to their image parameters and takes the highest-scoring picture as the completion resource.
In another optional implementation manner, the first electronic device scores the multiple pictures on image parameters of one or more preset dimensions and takes the picture or pictures with the highest score in those preset dimensions as completion resources.
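The scoring step can be sketched as an average over image parameters; the parameter names, the equal weighting, and the higher-is-better convention are assumptions for illustration:

```python
def score(image_parameters):
    # Equal-weight average; a real scorer would weight dimensions and
    # invert lower-is-better parameters such as the noise value.
    return sum(image_parameters.values()) / len(image_parameters)

def pick_highest_scoring(candidates):
    """candidates: dict mapping picture_id -> {parameter: value}.
    Returns the id of the highest-scoring picture, to be used as the
    completion resource."""
    return max(candidates, key=lambda pid: score(candidates[pid]))
```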
With reference to the first aspect, in a possible implementation manner, modifying the image in the first region according to the first completion resource to obtain the second picture specifically includes: the first electronic device crops the image in the first region, replaces it with the first completion resource, and pastes the first completion resource back into the first region. In this way, the first electronic device replaces the image in the first region with a real completion resource, improving its visual effect. Here, "replaces and pastes back" means that the first electronic device crops the image in the first region of the first picture, replaces it with the first completion resource, and then places the first completion resource in the first region of the first picture to obtain the second picture, which no longer includes the original image of the first region. The center point of the first completion resource in the second picture coincides with the center point of the image that occupied the first region before cropping.
With reference to the first aspect, in a possible implementation manner, before the first electronic device obtains the first picture, the method further includes: the first electronic device displays a shooting preview interface that includes the picture captured by the camera in real time, a completion control, and a shooting control; the first electronic device receives a third operation on the completion control and a fourth operation on the shooting control; and obtaining the first picture specifically includes: in response to the third and fourth operations, the first electronic device obtains the first picture from the pictures captured by the camera in real time. Here, the first picture is an image captured in real time by the camera of the first electronic device. In a camera application scenario, before the first electronic device obtains the first picture, the user needs to enable the completion function (the third operation); the shooting control then receives the user's click operation (the fourth operation), and the first electronic device obtains the first picture and determines the first region, which is the completion region.
With reference to the first aspect, in a possible implementation manner, before the first electronic device obtains the first picture, the method further includes: the first electronic device displays a second user interface that includes a thumbnail of the first picture; the first electronic device receives a fifth operation on the thumbnail; and obtaining the first picture specifically includes: in response to the fifth operation, the first electronic device obtains the first picture. After the first electronic device obtains the first picture, the method further includes: the first electronic device displays a third user interface that includes the first picture and a completion control; the first electronic device receives a sixth operation on the completion control; and determining the first region specifically includes: in response to the sixth operation, the first electronic device determines the first region of the first designated target in the first picture. In the gallery application scenario, the first electronic device first receives the user's click operation on the thumbnail of the first picture (the fifth operation) and obtains the first picture; the user then enables the completion function by clicking the completion control (the sixth operation), and the first electronic device determines the first region, which is the completion region.
With reference to the first aspect, in one possible implementation manner, the completion control is a two-dimensional completion control or a three-dimensional completion control. When the completion control is two-dimensional, the second picture obtained by the first electronic device is a planar picture; when it is three-dimensional, the second picture is a three-dimensional picture. When the completion control is three-dimensional and receives the user's click operation, the method further includes: the first electronic device confirms that the angle of the first designated target is a first angle; the first electronic device matches, from the resource library according to the first designated target at the first angle, the first designated target at a second angle, at a third angle, and at a fourth angle; the first electronic device matches a preset three-dimensional modeling model according to the features of the first designated target; and the first electronic device obtains a fourth picture, which is a three-dimensional image, from the first designated target at the first, second, third, and fourth angles together with the preset three-dimensional modeling model. Modifying the image in the first region according to the first completion resource to obtain the second picture then specifically includes: the first electronic device replaces the image in the first region with the fourth picture and pastes the fourth picture back into the first region to obtain the second picture.
With reference to the first aspect, in a possible implementation manner, after the first electronic device modifies the image in the first region according to the first completion resource to obtain the second picture, the method further includes: the first electronic device displays a fourth user interface that includes the first picture, the second picture, and a save control; the first electronic device receives a seventh operation on the first picture and/or the second picture; and the first electronic device receives and responds to an eighth operation on the save control by saving the first picture and/or the second picture to the storage path corresponding to the gallery application. In this way, the user can choose to save the original picture, the completed picture, or both, improving the user experience.
In an optional implementation manner, the first electronic device may directly store the first picture and/or the second picture to a storage path corresponding to the gallery application. In this way, user operations are reduced.
In a second aspect, the present application provides an electronic device, which is a first electronic device including: one or more processors and one or more memories;
the one or more memories are coupled to the one or more processors and store computer program code comprising computer instructions, which the one or more processors invoke to cause the first electronic device to perform: obtaining a first picture; determining a first region where a first designated target is located in the first picture; and modifying the image in the first region according to a first completion resource to obtain a second picture, where the first completion resource is a picture other than the first picture. In this way, the electronic device can beautify the picture obtained by the first electronic device according to real picture resources, improving its visual effect.
The first picture may be a picture captured in real time by a camera of the first electronic device, a picture in a gallery, a picture in file management, a picture in a server, a picture sent to the first electronic device by a second electronic device, or the like.
The first electronic device may modify the image in the first region according to the first completion resource in any one of the following manners:
Manner 1: the first electronic device crops the image in the first region of the first picture, replaces it with the first completion resource, and then places the first completion resource in the first region to obtain the second picture. The center point of the first completion resource in the second picture coincides with the center point of the image that occupied the first region before cropping.
Manner 2: without cropping the image in the first region, the first electronic device directly overlays the first completion resource on the image in the first region to obtain the second picture, where the center point of the first completion resource coincides with the center point of the image in the first region.
Manner 3: without cropping the image in the first region, the first electronic device fuses the features of the first completion resource with the image features in the first region to obtain the second picture.
The method may also be used to beautify video captured by the first electronic device. A video is composed of successive frames of pictures, so beautifying a video actually means beautifying each frame that constitutes it; the method by which the first electronic device beautifies each frame of a video is similar to the method by which it beautifies the first picture, as provided by the present application.
With reference to the second aspect, in a possible implementation manner, before the first electronic device determines the first region where the first designated target is located in the first picture, the one or more processors are configured to invoke the computer instructions to cause the first electronic device to perform: receiving a first operation of a user on the first region; and, in response to the first operation, determining the first region of the first designated target in the first picture. The first operation may be a click operation, a slide operation, or the like. In this way, the first electronic device can determine the region the user wants to complete according to the user's first operation, improving the user experience.
With reference to the second aspect, in a possible implementation manner, after the first electronic device determines the first region where the first designated target is located in the first picture, the one or more processors are configured to invoke the computer instructions to cause the first electronic device to perform: determining, from a resource library according to the features of the first designated target, a first completion resource whose feature similarity with the first designated target is greater than a preset value, where the resource library is any one of the following: pictures stored locally on the first electronic device, pictures stored locally on a second electronic device, or pictures in a server.
In some embodiments, the first electronic device obtains a first date of a third picture in the resource library, where the first date is the date on which the third picture was saved to the library; when the difference between the first date and a specified date is greater than a preset value, the first electronic device removes the third picture from the library. In this way, the first electronic device can remove pictures whose saving time exceeds the specified time, keeping the library updated so that its pictures stay closer to the user's current behavior characteristics.
In some embodiments, the first electronic device further needs to determine the complementary resource. Specifically, the first electronic device acquires image parameters of a plurality of pictures; the first electronic device determines a completion resource from the plurality of pictures based on image parameters of the plurality of pictures, wherein the image parameters include one or more of exposure, sharpness, color values, quality-sensitive values, noise values, anti-shake values, flash values, and artifact values.
In an optional implementation manner, the first electronic device scores the plurality of pictures according to their image parameters, and takes the picture with the highest score as the completion resource.
In another optional implementation manner, the first electronic device scores the plurality of pictures according to their image parameters in one or more preset dimensions, and takes the one or more pictures with the highest score in each preset dimension as completion resources.
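Both scoring strategies can be sketched in a few lines; the parameter names and the unweighted sum are illustrative assumptions, since the patent does not fix a scoring formula:

```python
def overall_best(pictures):
    # Strategy 1: one completion resource, the picture with the highest
    # total score across all image parameters.
    return max(pictures, key=lambda p: sum(p["params"].values()))

def best_per_dimension(pictures, dimensions):
    # Strategy 2: one completion resource per preset dimension, the
    # top-scoring picture in that dimension.
    return {d: max(pictures, key=lambda p: p["params"][d])["name"]
            for d in dimensions}
```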
With reference to the second aspect, in one possible implementation manner, the one or more processors are specifically configured to invoke the computer instructions to cause the first electronic device to perform: cropping the image in the first area; and replacing the image in the first area with the first completion resource and pasting the first completion resource back into the first area. In this way, the first electronic device replaces the image in the first area with a real completion resource, improving the visual effect of that area. Specifically, the first electronic device crops out the image in the first area of the first picture, replaces it with the first completion resource, and then places the first completion resource in the first area to obtain the second picture, where the second picture does not include the original image of the first area, and the center point of the first completion resource in the second picture coincides with the center point of the image that occupied the first area before cropping.
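A minimal pure-Python sketch of the crop/replace/paste-back step, modeling the picture as a 2-D list of pixels (a real implementation would use an image library); the region convention and all names are illustrative:

```python
def complete_region(picture, region, resource):
    """Replace the pixels of `region` (top, left, bottom, right; half-open)
    with `resource`, pasted so that both center points coincide."""
    top, left, bottom, right = region
    rh, rw = len(resource), len(resource[0])
    cy, cx = (top + bottom) // 2, (left + right) // 2
    py, px = cy - rh // 2, cx - rw // 2  # top-left corner of the paste
    out = [row[:] for row in picture]    # leave the first picture untouched
    for y in range(rh):
        for x in range(rw):
            out[py + y][px + x] = resource[y][x]
    return out
```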
With reference to the second aspect, in a possible implementation manner, the first electronic device further includes a camera; before the first electronic device acquires the first picture, the one or more processors are specifically configured to invoke the computer instructions to cause the first electronic device to perform: displaying a shooting preview interface, where the shooting preview interface includes a picture acquired by the camera in real time, a completion control, and a shooting control; receiving a third operation for the completion control and a fourth operation for the shooting control; and in response to the third operation and the fourth operation, acquiring the first picture from the picture acquired by the camera in real time. Here, the first picture is an image captured in real time by the camera of the first electronic device. In a camera application scenario, before the first electronic device acquires the first picture, the user first enables the completion function (the third operation); the shooting control then receives the user's click operation (the fourth operation), whereupon the first electronic device acquires the first picture and determines the first area in it, the first area being the area to be completed.
With reference to the second aspect, in a possible implementation manner, before the first electronic device acquires the first picture, the one or more processors are specifically configured to invoke the computer instructions to cause the first electronic device to perform: displaying a second user interface, where the second user interface includes a thumbnail of the first picture; receiving a fifth operation for the thumbnail of the first picture; and in response to the fifth operation, acquiring the first picture. After the first electronic device acquires the first picture, the one or more processors are specifically configured to invoke the computer instructions to cause the first electronic device to perform: displaying a third user interface, where the third user interface includes the first picture and a completion control; receiving a sixth operation for the completion control; and in response to the sixth operation, determining the first area where the first designated target in the first picture is located. In a gallery application scenario, the first electronic device first receives the user's click operation on the thumbnail of the first picture (the fifth operation) and acquires the first picture. The user then enables the completion function, that is, clicks the completion control (the sixth operation), and the first electronic device determines the first area in the first picture, the first area being the area to be completed.
With reference to the second aspect, in one possible implementation manner, the completion control is a two-dimensional completion control or a three-dimensional completion control. When the completion control is two-dimensional, the second picture obtained by the first electronic device is a planar picture; when the completion control is three-dimensional, the second picture obtained by the first electronic device is a three-dimensional picture. When the completion control is a three-dimensional completion control and has received the user's click operation, the method further includes: the first electronic device determines that the first designated target is at a first angle; according to the first designated target at the first angle, the first electronic device matches, from the resource library, the first designated target at a second angle, at a third angle, and at a fourth angle; the first electronic device matches a preset three-dimensional modeling model according to the features of the first designated target; and the first electronic device obtains a fourth picture, which is a three-dimensional image, from the first designated target at the first, second, third, and fourth angles together with the preset three-dimensional modeling model. In this case, modifying the image in the first area according to the first completion resource to obtain the second picture specifically includes: replacing the image in the first area with the fourth picture and pasting the fourth picture back into the first area to obtain the second picture.
With reference to the second aspect, in a possible implementation manner, after the first electronic device modifies the image in the first area according to the first completion resource to obtain the second picture, the one or more processors are specifically configured to invoke the computer instructions to cause the first electronic device to execute: displaying a fourth user interface, where the fourth user interface includes the first picture, the second picture, and a save control; receiving a seventh operation for the first picture and/or the second picture; and receiving and responding to an eighth operation for the save control, saving the first picture and/or the second picture to a storage path corresponding to the gallery application. In this way, the user can choose to save either or both of the original picture and the completed picture, improving user experience.
In an optional implementation manner, the first electronic device may directly store the first picture and/or the second picture to a storage path corresponding to the gallery application. In this way, user operations are reduced.
In a third aspect, the present application provides a readable storage medium, which stores computer instructions, and when the computer instructions are executed on a first electronic device, the first electronic device is caused to perform a picture completion method in any one of the foregoing possible implementation manners.
Drawings
FIG. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application;
FIG. 2 is a block diagram of a software structure of an electronic device 200 according to an embodiment of the present application;
FIG. 3 is a system architecture diagram of picture completion according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating the principle of objective image-quality evaluation according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a plurality of completion resources obtained by an intelligent recommendation module according to preset-dimension scores, according to an embodiment of the present application;
FIGS. 6A-6D are a set of UI diagrams provided by an embodiment of the present application;
FIGS. 7A-7G are another set of UI diagrams provided by an embodiment of the present application;
FIGS. 8A-8M are UI diagrams of picture completion in a set of camera application scenarios provided by an embodiment of the present application;
FIGS. 8N-8Q are UI diagrams of picture completion in another set of camera application scenarios provided by an embodiment of the present application;
FIGS. 9A-9D are UI diagrams of picture completion in yet another set of camera application scenarios provided by an embodiment of the present application;
FIGS. 10A-10D are UI diagrams of picture completion in a set of gallery application scenarios provided by an embodiment of the present application;
FIGS. 10E-10K are UI diagrams of picture completion in another set of gallery application scenarios provided by an embodiment of the present application;
FIGS. 11A-11C are UI diagrams of picture completion in another set of camera application scenarios provided by an embodiment of the present application;
FIGS. 12A-12F are UI diagrams of picture completion in a set of file management application scenarios provided by an embodiment of the present application;
FIGS. 13A-13F are UI diagrams of picture completion in a set of Internet application scenarios provided by an embodiment of the present application;
FIG. 14 is a schematic flowchart of a picture completion method according to an embodiment of the present application;
FIG. 14A is a schematic diagram of another gallery user interface provided by an embodiment of the present application;
FIG. 15 is a flowchart illustrating another picture completion method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the application, unless stated otherwise, "plurality" means two or more.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The following describes an embodiment specifically by taking the electronic device 100 as an example. It should be understood that the electronic device 100 shown in fig. 1 is merely an example, and that the electronic device 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or will reuse. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. Processor 110 and display screen 194 communicate via a DSI interface to implement display functions of electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect earphones and play audio through them, and to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also continuously self-learn. Applications such as intelligent recognition of the electronic device 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 implements various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 receives a call or voice information, the user can hear the voice by placing the receiver 170B close to the ear.
The microphone 170C, also referred to as a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may further be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may include at least two parallel plates made of an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
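The threshold rule above can be sketched as follows. This is a hypothetical illustration, not the device's actual implementation; the function name, the normalized pressure unit, and the threshold value of 0.5 are all assumptions.

```python
# Hypothetical sketch of mapping touch-operation intensity to an operation
# instruction for the short message icon, per the first-pressure-threshold rule.
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure units (assumed value)

def dispatch_message_icon_touch(pressure: float) -> str:
    """Return the instruction executed when the short message icon is touched."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_message"   # lighter press: view the short message
    return "new_message"        # firmer press: create a new short message

print(dispatch_message_icon_touch(0.2))  # view_message
print(dispatch_message_icon_touch(0.8))  # new_message
```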
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude from the barometric pressure value measured by the air pressure sensor 180C, to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait screen switching, pedometers, and other applications.
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, in a photographing scene, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode, and detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object near it; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object near it. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, take a photo with the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown caused by the low temperature. In still other embodiments, when the temperature is lower than a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by the low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting thereon or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M can acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the bone mass vibrated by the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal, acquired by the bone conduction sensor 180M, of the bone mass vibrated by the vocal part, so as to implement a voice function. The application processor can parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input, and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software structure of an electronic device 200 according to an embodiment of the present disclosure. The layered architecture can divide the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer (abbreviated as an application layer), an application framework layer (abbreviated as a framework layer), a Hardware Abstraction Layer (HAL) layer, and a Kernel layer (also referred to as a driver layer).
The application layer (Application) may include a series of application packages, such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, short message, and desktop launcher (Launcher) applications. For example, as shown in fig. 2, the application layer may include a camera system application (also referred to as a camera application).
As shown in fig. 2, the camera system application may be configured to display, in a photo taking mode, an image frame reported by an underlying layer on the viewing interface of the camera system application. In the embodiment of the application, the user can enable the intelligent completion function before shooting a picture. The camera system application starts shooting to obtain a picture, performs completion processing on the picture collected by the camera according to real picture resources in the resource library, beautifies the picture, and improves the visual effect of the picture. Specifically, by default the camera system application uses the regions of all designated targets identified in the whole picture as the completion area; that is, the areas of all identified designated targets in the picture are completed. Alternatively, the camera system application may determine the completion area from the user's sliding trajectory in the picture. Alternatively, the camera system application may receive a user setting to use, as the completion area, the area where the designated target with the highest category priority among the identified designated targets is located; the priority of the designated target categories may be set by the user, for example, person priority greater than animal priority greater than plant priority, and the like. After the completion area is determined, the feature generation module performs feature extraction on the designated target of the completion area in the picture to obtain the features of the designated target, and the feature value of the designated target can be represented by a feature vector. The feature vector may represent color features, texture features, contour features, and other features of the target. The feature generation module sends the features of the designated target in the completion area to the intelligent search module in the gallery system application.
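As a minimal sketch of the feature generation step, the region of a designated target can be reduced to a feature vector. Here a coarse color histogram stands in for the color, texture, and contour features the module extracts; the actual model or algorithm is not specified by this description, so the histogram choice and bin count are assumptions.

```python
# Sketch: reduce a designated target's region crop to a normalized color
# histogram feature vector (stand-in for the feature generation module).
import numpy as np

def extract_color_feature(region: np.ndarray, bins: int = 8) -> np.ndarray:
    """region: H x W x 3 uint8 crop of the completion area."""
    hist, _ = np.histogramdd(
        region.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)))
    vec = hist.flatten()
    return vec / vec.sum()  # normalize so crops of different sizes compare

region = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
feature = extract_color_feature(region)
print(feature.shape)  # (512,)
```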
The intelligent search module receives the features of the designated target in the completion area sent by the feature generation module, and searches the resource library, according to those features, for preselected picture resources whose feature similarity with the designated target in the completion area is greater than a threshold value. The resource library may contain local pictures (i.e., pictures stored in the gallery), cloud pictures, pictures in the internet, or pictures obtainable by interconnected electronic devices; the pictures obtainable by the interconnected electronic devices may include local pictures of the interconnected electronic devices, cloud pictures of the interconnected electronic devices, network pictures acquired by the interconnected electronic devices, and the like. The intelligent search module sends the preselected picture resources to the intelligent recommendation module.
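The threshold search described above can be sketched as follows. The similarity measure is not specified by the description, so cosine similarity and the 0.8 threshold are assumptions for illustration.

```python
# Sketch of the intelligent search step: keep the resource-library pictures
# whose feature similarity with the designated target exceeds a threshold.
import numpy as np

def preselect(target_feat: np.ndarray, library: dict,
              threshold: float = 0.8) -> list:
    """library: picture id -> feature vector. Returns preselected picture ids."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return [pid for pid, feat in library.items()
            if cos(target_feat, feat) > threshold]

target = np.array([1.0, 0.0, 1.0])
lib = {"p1": np.array([1.0, 0.1, 0.9]),   # similar to the target
       "p2": np.array([0.0, 1.0, 0.0])}   # dissimilar
print(preselect(target, lib))  # ['p1']
```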
The intelligent recommendation module receives the preselected picture resources sent by the intelligent search module, and performs quality evaluation on them to obtain, as the completion resource, the preselected picture with the highest average of the quality scores across multiple dimensions, or to obtain, as completion resources, multiple pictures with the highest quality scores in different dimensions respectively. The intelligent recommendation module sends the completion resource to the intelligent completion image processing module in the camera system application.
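The highest-average-score selection can be sketched as follows. The quality dimensions (sharpness, exposure, composition) are illustrative assumptions; the description does not name the dimensions actually scored.

```python
# Sketch of the intelligent recommendation step: choose the preselected
# picture whose quality scores have the highest average across dimensions.
def pick_completion_resource(candidates: dict) -> str:
    """candidates: picture id -> {dimension name: quality score}."""
    return max(candidates,
               key=lambda pid: sum(candidates[pid].values()) / len(candidates[pid]))

scores = {
    "img_a": {"sharpness": 0.9, "exposure": 0.7, "composition": 0.8},
    "img_b": {"sharpness": 0.6, "exposure": 0.9, "composition": 0.7},
}
print(pick_completion_resource(scores))  # img_a (average 0.80 vs 0.73)
```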
The intelligent completion image processing module receives the completion resource sent by the intelligent recommendation module. The intelligent completion image processing module performs two-dimensional beautification processing on the picture acquired by the camera according to a two-dimensional beautification picture algorithm, that is, it pastes the completion resource back into the area where the designated target is located in the picture acquired by the camera to obtain a high-definition picture. Alternatively, the intelligent completion image processing module performs three-dimensional modeling processing on the picture acquired by the camera according to a three-dimensional modeling picture algorithm, that is, it obtains pictures of other dimensions (such as the back, the left, and the right) from the completion resource of the current dimension (such as the front), and performs three-dimensional modeling according to a preset three-dimensional model to obtain the high-definition picture.
How the intelligent completion image processing module performs two-dimensional beautification processing and three-dimensional modeling processing on the picture acquired by the camera will be described in detail in the following embodiments.
The Framework layer (Framework) provides an Application Programming Interface (API) and a programming Framework for applications at the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 2, the framework layer may provide Camera APIs such as Camera API (API 1| API 2), Camera Service (Camera Service), Camera extension Service (Camera Service Extra), and hardware development kit (Hw SDK).
Wherein, the Camera API (API 1| API 2) serves as an interface for the bottom layer (such as a hardware abstraction layer) to interact with the application layer. Specifically, the Camera API (API 1| API 2) may receive a notification from an upper layer (e.g., an application layer) to start taking a picture, and may start the Camera through the Camera service, the Camera extension service, and the HwSDK process.
The HAL layer serves to connect the framework layer and the kernel layer. For example, the HAL layer may provide data pass-through between the framework layer and the kernel layer. Of course, the HAL layer may also process data from the underlying layer (i.e., the kernel layer) and then transmit it to the framework layer. For example, the HAL layer may translate parameters of the kernel layer regarding the hardware devices into software programming languages recognizable by the framework layer and the application layer. For example, the HAL layer may include HAL3.0, a three-dimensional modeling picture algorithm, and a two-dimensional beautification picture algorithm.
It should be noted that the three-dimensional modeling algorithm and the two-dimensional beautification algorithm may also be located in an application layer, and the storage locations of the three-dimensional modeling algorithm and the two-dimensional beautification algorithm are not limited in the present application.
The kernel layer includes a Camera Driver, an image signal processor (ISP), and Camera devices. A Camera device may include a camera including a camera lens, an image sensor, and the like. In some embodiments, the image signal processor ISP may be provided separately from the Camera device; in other embodiments, the image signal processor ISP may be provided in the Camera device.
The image signal processor ISP and the Camera devices are the main devices for taking pictures. The optical signal reflected by the viewing environment passes through the camera lens onto the image sensor, where it is converted into an electrical signal; the electrical signal is processed by the image signal processor ISP and can be transmitted to an upper layer through the camera driver as a raw parameter stream. Moreover, the Camera driver may also receive a notification (e.g., a notification indicating to turn on or turn off the camera) from an upper layer, and send a function processing parameter stream to the Camera device according to the notification to turn on or turn off the corresponding camera.
The following embodiments of the present application provide a picture completion method, including: the electronic device 100 (first electronic device) acquires the first picture, and the electronic device 100 identifies a region (first region) in the first picture where the designated target is located according to a designated target identification model or algorithm, and extracts features of the designated target. The electronic device 100 finds the first completion resource matching the characteristic of the specified target from the resource library based on the characteristic of the specified target. Then, the electronic device 100 modifies the image in the first area according to the first completion resource to obtain a second picture. In this way, the electronic device 100 can beautify the picture acquired by the electronic device 100 according to the real picture resource, and the visual effect of the picture acquired by the electronic device 100 is improved.
The first picture may be a picture acquired by a camera of the electronic device 100 in real time, or a picture in a gallery of the electronic device 100, or a picture in file management, or a picture sent to the electronic device 100 by another electronic device, or a picture in the internet, or the like.
The electronic device 100 may modify the image in the first region according to the first completion resource in any one of the following manners:
The first manner: the electronic device 100 crops out the image in the first area in the first picture, and places the first completion resource in the first area in the first picture to obtain the second picture. The center point of the first completion resource in the second picture coincides with the center point of the image that was in the first area of the first picture before cropping.
The second manner: the electronic device 100 does not need to crop the image in the first area in the first picture; the electronic device 100 directly overlays the first completion resource on the image in the first area to obtain the second picture. The center point of the first completion resource coincides with the center point of the image in the first region.
The third manner: the electronic device 100 does not need to crop the image in the first region in the first picture; the electronic device 100 fuses the features of the first completion resource with the image features in the first region to obtain the second picture.
The electronic device 100 may also modify the image in the first area according to the first completion resource in other manners, which is not limited herein.
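The second manner above (overlaying the completion resource so the center points coincide) can be sketched in a few lines of array code. This is an illustrative assumption-laden sketch, not the patented implementation: it assumes rectangular images and that the resource fits inside the picture at the given center.

```python
# Sketch of the overlay manner: paste the completion resource onto the first
# picture so its center coincides with the center of the first area.
import numpy as np

def overlay_centered(picture: np.ndarray, resource: np.ndarray,
                     region_center: tuple) -> np.ndarray:
    """Paste `resource` onto a copy of `picture`, centered at (row, col)."""
    out = picture.copy()
    h, w = resource.shape[:2]
    top = region_center[0] - h // 2
    left = region_center[1] - w // 2
    out[top:top + h, left:left + w] = resource
    return out

pic = np.zeros((100, 100, 3), dtype=np.uint8)        # first picture
res = np.full((10, 10, 3), 255, dtype=np.uint8)      # completion resource
second = overlay_centered(pic, res, (50, 50))        # second picture
print(second[50, 50])  # [255 255 255]
```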
The electronic device 100 may be preset with recognition algorithms or models for some designated targets, to recognize the category of a designated target in an image and the position of the designated target in the image. By way of example, the designated targets may include, but are not limited to, a person, an animal, a car, a moon, a flower, a bowl, a cup, a house, and the like.
The repository may be a picture stored in a gallery of the electronic device 100, or a cloud picture of the electronic device 100, or a picture in the internet, or a picture acquirable by the interconnected electronic devices, where the picture acquirable by the interconnected electronic devices may include a local picture of the interconnected electronic devices (second electronic devices), a cloud picture of the interconnected electronic devices, or a network picture acquired by the interconnected electronic devices, and the like, and the repository is not limited in the present application.
The method may also be used to beautify video captured by the electronic device 100. A video is made up of frames of pictures. That is, beautifying a video by the electronic device 100 is actually beautifying each frame of picture that constitutes the video, and the method by which the electronic device 100 beautifies each frame of picture in the video is the same as the method, provided in this application, by which the electronic device 100 beautifies the first picture, and is not described herein again.
Fig. 3 illustrates a system architecture diagram for picture completion.
As shown in fig. 3, when the camera application is used for photographing, before the photographing is completed, the camera application accepts the user's operation of enabling the intelligent completion function; after the photographing is completed, a low-definition image is acquired through the camera.
The completion area setting module determines an area needing completion in the first picture. The completion area setting module may determine an area to be completed in the first picture by any one of the following methods.
In a possible implementation manner, the completion area setting module takes all the areas of the designated targets identified in the first picture as the completion area. Specifically, the completion area setting module acquires the first picture, and identifies one or more designated targets and the areas where they are located in the first picture by using a designated target recognition model or algorithm. The electronic device 100 automatically takes the areas in which the one or more designated targets are located as the completion area.
In another possible implementation manner, the completion area setting module acquires a first picture, and identifies the category of one or more designated objects and the area where the one or more designated objects are located in the first picture by using a designated object identification model. The completion area setting module only takes the area where the designated target with the highest category priority level is identified in all designated targets as a completion area according to the preset category priority level of the designated target.
For example, the category priority of the designated targets preset in the completion area setting module is, from high to low, person, animal, tree, house, flower, moon, bag, car, bowl, and the like. Suppose the one or more designated targets automatically identified by the completion area setting module in the first picture are a person, an animal, a flower, a house, a bag, a tree, and the like. The completion area setting module only takes, as the completion area, the area where the designated target with the highest category priority among all identified designated targets is located; according to the preset priority, the completion area setting module therefore only takes the area in the first picture where the automatically identified person is located as the completion area.
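The priority rule in this example can be sketched as follows, using the example's own category ordering; the function and category names are illustrative.

```python
# Sketch of the category-priority rule: among the designated targets
# recognized in the first picture, keep only the one whose category has
# the highest preset priority.
PRIORITY = ["person", "animal", "tree", "house", "flower",
            "moon", "bag", "car", "bowl"]  # high to low, as in the example

def select_completion_target(detected: list) -> str:
    """detected: categories of designated targets found in the first picture."""
    return min(detected, key=PRIORITY.index)

print(select_completion_target(["flower", "animal", "house", "person"]))  # person
```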
In another possible implementation manner, the completion area setting module may receive a selection operation of a user in the first picture, and determine the completion area according to the selection operation of the user.
In some embodiments, the selection operation may be a sliding operation, that is, the completion area setting module receives a sliding operation of a user in the first picture, and determines the completion area of the first picture according to the sliding track. In this way, in some embodiments, when the completion area setting module cannot identify the area where the designated target is located in the first picture, the completion area setting module may determine the completion area according to the sliding track of the user.
Illustratively, when there are people, animals and houses, trees, etc. in the first picture, the user only wants to complete the animals in the first picture. The completion area setting module can receive sliding operation of a user in the first picture to circle the area of the animal in the first picture. And the completion area setting module takes the area of the animal in the first picture defined according to the sliding track of the user as a completion area.
Illustratively, the first picture includes a person including a head, a body, arms, and legs. When the user only wants to replace a leg in the first picture, the electronic device 100 may receive a sliding operation of the user in the area where the person's leg is located in the first picture, and the electronic device 100 takes the area where the person's leg is located as the completion area.
In other embodiments, the selection operation may also be a click operation, that is, the completion area setting module receives a click operation of a user in the first picture, and confirms the completion area of the first picture.
Specifically, the completion area setting module may automatically identify an area in which the designated target is located in the first picture, and then, the completion area setting module may receive a click operation of the user on the first picture, and when the clicked position coordinate is in the area in which the designated target is located, in response to the click operation, the completion area setting module takes the area in which the designated target including the clicked position coordinate is located as the completion area.
It should be noted that the selection operation is not limited to the sliding operation and the clicking operation, and may be other operations, which is not limited herein.
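The click-selection case above can be sketched as a point-in-region test. Axis-aligned bounding boxes are an assumption for illustration; the recognition model may produce finer region shapes.

```python
# Sketch of the click-selection rule: the clicked coordinate is tested
# against the regions of the recognized designated targets, and the region
# containing the click becomes the completion area.
from typing import Optional

def region_for_click(click: tuple, regions: dict) -> Optional[str]:
    """regions: target name -> (x_min, y_min, x_max, y_max) bounding box."""
    x, y = click
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # click fell outside every designated target's region

boxes = {"person": (10, 10, 60, 120), "animal": (70, 40, 120, 90)}
print(region_for_click((30, 50), boxes))    # person
print(region_for_click((200, 200), boxes))  # None
```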
Then, the feature generation module performs feature extraction on the designated target in the completion area of the first picture.
In a possible implementation manner, after the completion area setting module identifies one or more designated objects and an area where the one or more designated objects are located in the first picture by using a designated object identification model or algorithm, the feature generation module will automatically extract features of the one or more designated objects in the first picture. The one or more target-specific features may be texture features, contour features, color features, and the like.
In another possible implementation manner, the completion area setting module takes, as the completion area, only the area where the designated target with the highest category priority among the one or more automatically identified designated targets is located, according to the preset category priority of designated targets, and the feature generation module automatically extracts the features of the designated target in the completion area of the first picture. The features of the designated target may be texture features, contour features, color features, and the like.
In another possible implementation manner, the completion area setting module determines the completion area according to the sliding track of the user. The feature generation module performs feature extraction on the specified target in the completion area by adopting a preset model or algorithm to obtain the feature of the specified target in the first picture completion area. The features of the designated object may be texture features, contour features, color features, and the like.
The feature generation module sends the features of the specified target in the completion area of the first picture to the intelligent search module.
The intelligent search module matches completion resources from a repository (the preselected picture resources) according to the features of the specified target.
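Matching against the repository can be sketched as a similarity search over feature vectors. The sketch below assumes features are plain numeric vectors compared by cosine similarity against a preset threshold; the actual feature encoding (texture, contour, color) and similarity measure are not specified in the text, so both are illustrative:

```python
# Sketch: match completion resources by feature similarity.
# Assumption: features are numeric vectors and "similarity greater than a
# preset value" is modeled as cosine similarity above a threshold.
import math
from typing import List, Sequence, Tuple

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_completion_resources(
    target_feature: Sequence[float],
    repository: List[Tuple[str, Sequence[float]]],  # (picture id, feature)
    threshold: float = 0.8,
) -> List[str]:
    """Return ids of repository pictures whose feature similarity to the
    specified target exceeds the preset value."""
    return [pid for pid, feat in repository
            if cosine_similarity(target_feature, feat) > threshold]
```

The same routine applies whether the repository lives locally, on a cloud server, or on another device; only where it executes changes.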
The preselected picture resource may be a plurality of pictures stored locally by the electronic device 100, a plurality of pictures stored in a cloud server, a plurality of pictures in the internet, or a plurality of pictures that can be acquired by other electronic devices that establish communication connection with the electronic device 100.
Specifically, when the preselected picture resource is a plurality of locally stored pictures, the electronic device 100 searches for a completion resource having a feature similarity greater than a preset value with respect to the specified target from the plurality of locally stored pictures according to the feature of the specified target.
In order to ensure the timeliness of the completion resource, the electronic device 100 may screen the plurality of locally stored pictures and delete those whose creation date differs from a specified date by more than a preset value, so as to obtain a picture set. The electronic device 100 then searches the picture set for a completion resource whose feature similarity with the specified target is greater than a preset value. This guarantees the timeliness of the completion resource and better matches the current behavior characteristics of the user.
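The date-based screening step can be sketched as follows; the record layout, the `created` field, and the 30-day window are illustrative assumptions, not values from the text:

```python
# Sketch: keep only pictures whose creation date is close enough to the
# specified date, discarding stale ones before feature matching.
from datetime import date
from typing import Dict, List

def screen_by_date(pictures: List[Dict], reference_date: date,
                   max_days: int) -> List[Dict]:
    """Return the picture set whose creation dates differ from the
    specified date by no more than max_days (the preset value)."""
    return [p for p in pictures
            if abs((p["created"] - reference_date).days) <= max_days]
```

Feature matching then runs only over the screened picture set rather than the whole gallery.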
When the preselected picture resource is a picture stored in a cloud server, it may be a picture stored in the cloud server of the electronic device 100. The electronic device 100 establishes a communication connection with the cloud server, and the intelligent search module sends the features of the specified target in the first picture to the cloud server. The cloud server searches the pictures it stores for completion resources whose feature similarity with the specified target is greater than a preset value, and sends the matching completion resources to the intelligent search module.
When the preselected picture resource is a picture saved in another electronic device (e.g., the electronic device 200) that establishes a communication connection with the electronic device 100 by Bluetooth or the like, the electronic device 100 establishes a communication connection with the electronic device 200, and the intelligent search module sends the features of the specified target in the first picture to the electronic device 200. The electronic device 200 searches for completion resources matching the features of the specified target from the pictures stored on the electronic device 200, or from the cloud pictures of the electronic device 200. Alternatively, the electronic device 200 establishes a communication connection with the internet through a wireless communication technology such as a 4G network, a 5G network, or a wireless local area network (WLAN) and sends the features of the specified target in the first picture to a search engine (e.g., Baidu). The search engine finds completion resources matching the features of the specified target from pictures in its database and sends them to the electronic device 200, and the electronic device 200 sends the matching completion resources to the intelligent search module.
When the preselected picture resource is a picture in the internet, the electronic device 100 establishes a communication connection with the internet through a wireless communication technology such as a 4G network, a 5G network, or a wireless local area network (WLAN). The intelligent search module sends the features of the specified target in the first picture to a search engine (e.g., Baidu). The search engine finds completion resources matching the features of the specified target from pictures in its database and sends them to the intelligent search module.
After the intelligent searching module searches the completion resources matched with the characteristics of the specified target from the preselected picture resources, the intelligent searching module sends the completion resources to the intelligent recommending module.
The intelligent recommendation module performs quality scoring on the completion resources, and either takes the single completion resource with the highest average score across the quality-scoring dimensions as the first completion resource, or takes the pictures with the highest quality score in each of several different dimensions, respectively, as first completion resources.
In some embodiments, in addition to the pictures themselves, the repository includes shooting parameter information of the pictures, the shooting parameter information including one or more of a shooting date, a resolution, a pixel value, a picture type, a color value, an aperture value, an exposure time, an exposure compensation, a focal length, a flash mode, a brightness, a white balance, a sharpness, and the like.
The following embodiments of the present application take the pre-selected picture resource from the pictures stored in the gallery of the electronic device 100 as an example.
The electronic device 100 may score the quality of the preselected picture resource in any one of the following ways.
Method 1: objective evaluation of picture quality
In objective evaluation of picture quality, the intelligent recommendation module determines a completion resource from the preselected picture resources based on one or more image parameters such as an exposure value, a sharpness value, a color value, a texture value, a noise value, an anti-shake value, a focus value, and an artifact value.
Specifically, the intelligent recommendation module may evaluate the quality of the preselected picture resources in one or more image parameters such as exposure, sharpness, color value, texture value, noise value, anti-shake value, flash value, and artifact value according to quality-scoring models or algorithms of different dimensions, obtain quality scores of the preselected picture resources, and determine the completion resource according to those quality scores.
In a possible implementation manner, the intelligent recommendation module takes an average value of the multiple dimension values as a quality score of the multiple locally stored pictures. The intelligent recommendation module takes one picture with the highest quality score in the plurality of locally stored pictures as a completion resource.
Fig. 4 schematically illustrates the principle of objective evaluation of image quality.
The intelligent recommendation module calculates an exposure value of the preselected picture resource according to one or more parameters of exposure accuracy and exposure range by adopting an exposure scoring model or algorithm; the higher the exposure accuracy and the wider the exposure range, the higher the exposure value.
The intelligent recommendation module calculates a sharpness value of the preselected picture resource according to one or more parameters of visual resolution, modulation transfer function (MTF), and spatial frequency response (SFR) by adopting a sharpness scoring model or algorithm; the higher the visual resolution, the MTF, and the SFR, the higher the sharpness value.
The intelligent recommendation module calculates a color value of the preselected picture resource according to one or more parameters of white balance, color reproduction, and color unevenness by adopting a color scoring model or algorithm; the better the white balance, the higher the color reproduction, and the lower the color unevenness, the higher the color value.
The intelligent recommendation module calculates a texture value of the preselected picture resource according to one or more parameters of sharpness and sharpness loss by adopting a texture scoring model or algorithm; the higher the sharpness and the lower the sharpness loss, the higher the texture value.
The intelligent recommendation module calculates a noise value of the preselected picture resource according to one or more parameters of spatial noise, temporal noise, color noise, grayscale noise, signal-to-noise ratio, and dynamic range by adopting a noise scoring model or algorithm.
The intelligent recommendation module calculates a flash value of the preselected picture resource according to one or more parameters of center coincidence, illumination uniformity, color reproduction, and white balance by adopting a flash scoring model or algorithm.
The intelligent recommendation module calculates a focus value of the preselected picture resource according to one or more parameters of focus repeatability and focus speed by adopting a focus scoring model or algorithm.
The intelligent recommendation module calculates an artifact value of the preselected picture resource according to one or more parameters of ringing, chromatic aberration, distortion, and vignetting by adopting an artifact scoring model or algorithm.
The intelligent recommendation module calculates the average of one or more image parameters among the exposure value, sharpness value, color value, texture value, noise value, anti-shake value, focus value, artifact value, and the like of each preselected picture resource to obtain its average quality score. The intelligent recommendation module then takes the picture with the highest average quality score among the preselected picture resources as the first completion resource.
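Selecting the single first completion resource by average quality score can be sketched as below, assuming each candidate already carries per-dimension scores (the scoring models themselves are not reproduced, and the dimension names are illustrative):

```python
# Sketch: pick the candidate whose mean score across all quality
# dimensions is highest.
from typing import Dict

def best_by_average(candidates: Dict[str, Dict[str, float]]) -> str:
    """candidates maps picture id -> {dimension: score}. Return the id
    with the highest average quality score."""
    def mean(scores: Dict[str, float]) -> float:
        return sum(scores.values()) / len(scores)
    return max(candidates, key=lambda pid: mean(candidates[pid]))
```

The returned id identifies the picture used as the first completion resource.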
In another possible implementation manner, the intelligent recommendation module may also score the preselected picture resources according to the quality scoring models or algorithms of different dimensions, so as to obtain scores of the preselected picture resources in the different dimensions. The electronic device 100 obtains one completion resource with the highest quality score for each dimension. In this way, the electronic device 100 can obtain a completion resource with the highest quality score for each dimension, and the diversity of picture completion is improved.
FIG. 5 is a schematic diagram illustrating a plurality of completion resources obtained by the intelligent recommendation module according to the preset dimension score.
For example, the preset dimensions may be the color value, the sharpness value, the texture value, the anti-shake value, and the like.
The intelligent recommendation module calculates the average of one or more image parameters among the exposure value, sharpness value, color value, texture value, noise value, anti-shake value, focus value, artifact value, and the like of each of the plurality of locally stored pictures to obtain its average quality score, and takes the picture with the highest average quality score among the plurality of locally stored pictures as completion resource 1.
The intelligent recommendation module calculates color values of the plurality of locally stored pictures according to an existing color scoring model or algorithm, and takes the picture with the highest color value in the plurality of locally stored pictures as a completion resource 2.
The intelligent recommendation module calculates sharpness values of the plurality of locally stored pictures according to an existing sharpness scoring model or algorithm, and takes the picture with the highest sharpness value among the preselected picture resources as completion resource 3.
The intelligent recommendation module calculates texture values of the plurality of locally stored pictures according to an existing texture scoring model or algorithm, and takes the picture with the highest texture value among the preselected picture resources as completion resource 4.
The intelligent recommendation module calculates anti-shake values of the plurality of locally stored pictures according to an existing anti-shake scoring model or algorithm, and takes the picture with the highest anti-shake value among the preselected picture resources as completion resource 5.
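Picking one completion resource per preset dimension, as for completion resources 2 through 5, can be sketched as follows (dimension names and the candidate layout are illustrative):

```python
# Sketch: for each scoring dimension, pick the candidate picture with
# the highest score in that dimension.
from typing import Dict

def top_per_dimension(candidates: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    """candidates maps picture id -> {dimension: score}. Return a map
    from each dimension to the id of the highest-scoring picture.
    Assumes every candidate is scored in the same dimensions."""
    dims = next(iter(candidates.values())).keys()
    return {d: max(candidates, key=lambda pid: candidates[pid][d])
            for d in dims}
```

Each entry of the result corresponds to one dimension-specific completion resource, which is how different candidates can win in color, sharpness, and so on.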
In an optional implementation manner, the intelligent recommendation module returns completion resource 1 through completion resource 5 to the intelligent completion image processing module. Using each of them in turn as the first completion resource, the intelligent completion image processing module replaces the image in the completion area of the first picture and pastes it back into the area where the specified target is located, so as to obtain 5 high-definition pictures. The user may choose to save any one or more of the 5 high-definition pictures.
In another optional implementation manner, the intelligent recommendation module displays completion resource 1 through completion resource 5 on a user interface of the electronic device 100, and the electronic device 100 may receive a selection operation by which the user selects one or more completion resources (for example, completion resource 2) in the user interface as the first completion resource for completing the completion area in the first picture.
Method 2: subjective evaluation of picture quality
Subjective evaluation of picture quality relies only on qualitative judgments made by the user, who subjectively evaluates picture quality in advance. Subjective evaluation methods fall into two categories: absolute evaluation and relative evaluation.
First, the absolute evaluation criteria of the user on the picture quality are introduced.
In absolute evaluation, the user judges the quality of a picture to be evaluated against an original picture of standard quality, according to the user's own knowledge and understanding. Specifically, the picture to be evaluated and the original picture are alternately displayed to the user according to a certain rule; the user then scores the quality of the picture to be evaluated within a certain time after display, and the average of the multiple scores given by the user is taken as the quality score of the picture to be evaluated. Table 1 shows absolute evaluation criteria for picture quality.
TABLE 1

Quality scale                                                  Interference scale   Score
No image quality degradation is visible at all                 Very good            5
Degradation is visible but does not hinder viewing             Good                 4
Degradation is clearly visible and slightly hinders viewing    General              3
Hinders viewing                                                Poor                 2
Very seriously hinders viewing                                 Very poor            1
As shown in Table 1, the user compares the picture to be evaluated against the original picture, which is of standard quality, and scores it. If, compared with the original picture, the user cannot see any degradation at all in the picture to be evaluated, its interference scale is very good and its quality score is 5. If the user can see degradation but it does not hinder viewing, the interference scale is good and the quality score is 4. If the degradation is clearly visible and slightly hinders viewing, the interference scale is general and the quality score is 3. If the user finds that the picture to be evaluated hinders viewing, the interference scale is poor and the quality score is 2. If the picture to be evaluated very seriously hinders viewing, the interference scale is very poor and the quality score is 1.
Next, the relative evaluation criteria of the user on the picture quality will be described.
In relative evaluation, the user evaluates a batch of pictures against one another, sorts them from high to low by quality, and assigns picture quality scores. The relative evaluation criterion adopts single stimulus continuous quality evaluation (SSCQE). The specific method is as follows: a batch of pictures to be evaluated is displayed to the user in a certain order, and the user scores each picture while viewing the batch. Table 2 shows relative evaluation criteria for picture quality.
TABLE 2

Relative evaluation scale        Absolute evaluation scale   Score
Best in the group                Very good                   5
Above the average of the group   Good                        4
At the average of the group      General                     3
Below the average of the group   Poor                        2
Lowest in the group              Very poor                   1
As shown in Table 2, when the user evaluates a batch of pictures, if the quality of the first picture is the best in the batch, its absolute evaluation scale is very good and its quality score is 5. If its quality is above the average of the batch, the absolute evaluation scale is good and the quality score is 4. If its quality is at the average of the batch, the absolute evaluation scale is general and the quality score is 3. If its quality is below the average of the batch, the absolute evaluation scale is poor and the quality score is 2. If its quality is the lowest in the batch, the absolute evaluation scale is very poor and the quality score is 1.
In a specific implementation, the user may score the quality of a batch of pictures according to the absolute or relative evaluation criterion to obtain one or more pictures the user considers to be of the best quality, and the electronic device 100 receives a user operation and stores those pictures at a specified location. The electronic device 100 completes the first picture using the one or more pictures at the specified location as completion resources. After the camera application enables the intelligent completion function, when the first picture collected by the camera needs to be completed, the intelligent recommendation module can complete it according to the one or more pictures at the specified location. In this way, the camera application completes the first picture collected by the camera according to the user's selection, which improves user experience.
After the intelligent recommendation module obtains the completion resource, it sends the completion resource to the intelligent completion image processing module. The intelligent completion image processing module receives the completion resource and completes the first picture collected by the camera according to a two-dimensional beautification picture algorithm or a three-dimensional modeling picture algorithm.
First, completion of the first picture collected by the camera according to the two-dimensional beautification picture algorithm is introduced. Specifically, the intelligent completion image processing module places the completion resource in the area where the specified target is located in the first picture, thereby obtaining a high-definition picture.
Next, how the intelligent completion image processing module places the completion resource into the area of the first picture where the specified target is located is described.
In some embodiments, the size, angle, depth, and the like of the specified target in the completion resource are inconsistent with those of the specified target in the first picture, so the specified target in the completion resource needs to be adjusted to match the specified target in the first picture.
First, after obtaining the completion resource, the intelligent completion image processing module adjusts the size of the specified target in the completion resource; that is, the size of the specified target in the completion resource is adjusted to be consistent with the size of the specified target in the first picture.
Then, the intelligent completion image processing module adjusts the angle of the specified target in the resized completion resource. Specifically, taking the center point of the specified target in the first picture as the origin, the intelligent completion image processing module draws the target's perpendicular bisector and determines the included angle between this bisector and the vertical, together with whether the target leans to the left or to the right of the vertical. It then draws the perpendicular bisector of the specified target in the resized completion resource in the same way, determines the included angle between that bisector and the vertical, and adjusts this included angle to be consistent with the included angle between the specified target in the first picture and the vertical.
As shown in fig. 6A, fig. 6A exemplarily shows a schematic diagram of a specified target in a first picture. As can be seen from fig. 6A, the perpendicular bisector of the designated target in the first picture is the connecting line between the center points of the eyes and the nose, and the included angle between the perpendicular bisector and the vertical perpendicular is 0 degree.
As shown in fig. 6B, fig. 6B illustrates the resized completion resource. As can be seen from fig. 6B, the perpendicular bisector of the designated target in the completion resource is the line connecting the midpoint between the eyes and the nose, and the included angle between this bisector and the vertical is a°. As shown in fig. 6C, the intelligent completion image processing module rotates the completion resource counterclockwise by a° about the center point of the designated target, adjusting the included angle between its perpendicular bisector and the vertical to 0 degrees, as shown in fig. 6D, so that this included angle is consistent with the included angle between the designated target in the first picture and the vertical.
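The angle determination above can be sketched numerically. The sketch assumes image coordinates (y increases downward), takes the perpendicular bisector as the line from the midpoint between the eyes to the nose, and measures its signed angle from the vertical; the function names and sign convention are illustrative:

```python
# Sketch: signed angle of the target's perpendicular bisector from the
# vertical, and the rotation needed to align resource and first picture.
import math
from typing import Tuple

Point = Tuple[float, float]  # image coordinates, y grows downward

def bisector_angle(eye_center: Point, nose_tip: Point) -> float:
    """Signed angle in degrees between the eye-center-to-nose line and
    the vertical; positive when the target leans to the right."""
    dx = nose_tip[0] - eye_center[0]
    dy = nose_tip[1] - eye_center[1]
    return math.degrees(math.atan2(dx, dy))

def rotation_to_match(first_angle: float, resource_angle: float) -> float:
    """Signed rotation (degrees, same convention as bisector_angle) to
    apply to the completion resource so its bisector angle matches the
    specified target in the first picture."""
    return resource_angle - first_angle
```

With the first picture at 0° and the resource at a°, the rotation returned is a°, matching the fig. 6C adjustment.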
Finally, after the intelligent completion image processing module adjusts the size and the angle of the specified target in the completion resource to be consistent with the size and the angle of the specified target in the first picture, the depth of the specified target in the completion resource needs to be adjusted to be the same as the depth of the specified target in the first picture. The intelligent completion image processing module performs depth adjustment on the completion resources by performing angle adjustment on the side projection image of the specified target in the completion resources around a Z axis, wherein the Z axis is vertical to a horizontal plane (for example, an XOY plane).
Illustratively, when the specified target is a face, the depth of the specified target in the first picture refers to the change in depth of the face caused by the face tilting up, tilting down, or rotating about the Z axis.
As shown in fig. 7A, fig. 7A exemplarily shows a frontal three-dimensional image of the face in the first picture. When the face looks straight ahead, the eyebrow distance is X.
as shown in fig. 7B, fig. 7B exemplarily shows a schematic diagram of a three-dimensional stereoscopic image of a human face in a completion resource. As shown in fig. 7B, since the face image rotates clockwise around the Z axis, the depth of the face image in the completion resource is not consistent with the depth of the face image in the first picture, and the eyebrow distance is Y. Therefore, the intelligent completion image processing module needs to calculate an angle difference between the depth of the face image in the completion resource and the depth of the face image in the first picture, and then, adjust the depth of the face image in the completion resource to be 0.
As shown in fig. 7C, when the face looks straight ahead the eyebrow distance is X; after the face image rotates clockwise about the Z axis, the depth of the face image in the completion resource becomes inconsistent with that in the first picture and the eyebrow distance is Y. The intelligent completion image processing module calculates the angle difference B between the depth of the face image in the completion resource and the depth of the face image in the first picture from cos B = Y / X.
Then, the intelligent completion image processing module rotates the face image in fig. 7B counterclockwise by B degrees about the Z axis, so that the angle difference between the depth of the face image in the completion resource and the depth of the face image in the first picture is adjusted to 0.
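The depth-angle computation from the eyebrow distances reduces to B = arccos(Y / X). A minimal sketch, with a clamp against floating-point rounding (the clamp is an added safeguard, not part of the text):

```python
# Sketch: angle B by which the face in the completion resource is
# rotated about the Z axis, from cos B = Y / X.
import math

def depth_angle_degrees(front_eyebrow_dist: float,
                        observed_eyebrow_dist: float) -> float:
    """front_eyebrow_dist is X (face looking straight ahead);
    observed_eyebrow_dist is Y (as seen in the completion resource)."""
    ratio = observed_eyebrow_dist / front_eyebrow_dist
    ratio = max(-1.0, min(1.0, ratio))  # guard against rounding error
    return math.degrees(math.acos(ratio))
```

Rotating the resource back by B then brings its depth in line with the face in the first picture.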
After the intelligent completion image processing module adjusts the specified target in the completion resource to the same size, angle, and depth as the specified target in the first picture, it places the completion resource in the area where the specified target is located in the first picture to obtain the high-definition picture. In the high-definition picture, the center point of the specified target in the completion resource coincides with the center point of the specified target in the completion area of the first picture. After the completion resource is pasted back into the area of the specified target in the first picture, seams may remain where the pasted completion resource overlaps the first picture; the intelligent completion image processing module can process the overlapped portion directly using the detailed colors around it.
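Center-aligned paste-back can be sketched as computing the top-left offset that makes the resource's center coincide with the completion area's center; integer pixel coordinates and the box layout are illustrative assumptions:

```python
# Sketch: top-left coordinate at which to paste the adjusted completion
# resource so its center coincides with the completion area's center.
from typing import Tuple

def paste_offset(area_box: Tuple[int, int, int, int],
                 resource_size: Tuple[int, int]) -> Tuple[int, int]:
    """area_box is (left, top, right, bottom) of the completion area;
    resource_size is (width, height) of the adjusted resource."""
    left, top, right, bottom = area_box
    cx, cy = (left + right) // 2, (top + bottom) // 2
    w, h = resource_size
    return (cx - w // 2, cy - h // 2)
```

Seam handling around the pasted boundary would follow as a separate blending step over the overlapped portion.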
In other embodiments, the intelligent completion image processing module may further perform completion on the first picture acquired by the camera according to a three-dimensional modeling picture algorithm. Specifically, the intelligent completion image processing module performs three-dimensional modeling on the specified target according to a preset model to obtain a high-definition image.
When the intelligent completion image processing module completes a first picture acquired by a camera according to a three-dimensional modeling picture algorithm in response to the operation of a user, first, the intelligent completion image processing module needs to acquire the existing completion resources containing multiple angles of a specified target from a preset storage area (such as a gallery) according to the characteristics of the specified target identified in the first picture. The completion resource containing the plurality of angles of the specified target may be, for example, a front angle completion resource, a left angle completion resource, a right angle completion resource, and/or a rear angle completion resource.
In some embodiments, the intelligent completion image processing module may obtain pictures at other angles from the obtained current-angle completion resource by angle rotation; that is, the current-angle completion resource is rotated about the Z axis, which is perpendicular to the horizontal plane, to obtain pictures at other angles.
For example, the intelligent completion image processing module obtains the left angle completion resource according to the front angle completion resource, obtains the right angle completion resource according to the front angle completion resource, and the like.
For example, as shown in fig. 7D, fig. 7D is a schematic diagram of the front angle completion resource obtained by the intelligent completion image processing module. The intelligent completion image processing module can obtain left angle completion resources according to the obtained front angle completion resources.
As shown in fig. 7D, the intelligent completion image processing module rotates the front angle completion resource clockwise by a certain angle (e.g., 90 degrees) around a Z axis that is perpendicular to a horizontal plane (e.g., the XOY plane), and then projects the rotated front angle completion resource onto a Z plane, which is also perpendicular to the horizontal plane, so as to obtain the left angle completion resource.
For example, as shown in fig. 7E, after the front angle completion resource is rotated clockwise by a certain angle around the Z axis and projected onto the Z plane, the left angle completion resource shown in fig. 7E is obtained.
Rotating the front angle completion resource clockwise by a certain angle around the Z axis means that the intelligent completion image processing module rotates all pixel points in the front angle completion resource clockwise by that angle; it then projects the rotated pixel points onto the Z plane. During projection, the pixel points of the outer contour are retained, and the pixel points of the inner contour are directly covered.
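A toy version of this rotate-and-project step, assuming pixel points lifted to 3-D coordinates. The sign convention and the choice of the YOZ plane as the "Z plane" are illustrative assumptions, and the occlusion rule (keep outer-contour pixels, cover inner ones) is omitted.

```python
import numpy as np

def rotate_about_z(points, degrees):
    """Rotate (N, 3) pixel points about the Z axis, which is
    perpendicular to the horizontal XOY plane. A positive angle is
    counterclockwise seen from above; use a negative angle for the
    clockwise rotation described in the text."""
    t = np.radians(degrees)
    rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])
    return points @ rz.T

def project_to_z_plane(points):
    """Orthographic projection onto a vertical plane (here the YOZ
    plane): the depth coordinate X is simply dropped."""
    return points[:, 1:]
```

Because the rotation leaves the Z coordinate untouched, heights are preserved, which matches rotating a figure about a vertical axis before re-projecting it.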
In some embodiments, a completion resource at one angle (e.g., the front angle completion resource) may be missing part of a human body part (e.g., an ear or hair). When that completion resource is rotated around the Z axis to obtain a completion resource at another angle (e.g., the left angle completion resource), the resulting completion resource is missing the same human body part. In this case, the intelligent completion image processing module can complete the missing human body part (e.g., the ear or hair) in the completion resource at the other angle according to an existing big data model, such as ear model data or hair model data, so as to achieve the modeling effect.
For example, if the front angle completion resource does not include an ear, then when the front angle completion resource is rotated clockwise by a certain angle (e.g., 90 degrees) around the Z axis to obtain the left angle completion resource, the left ear of the left angle completion resource needs to be completed. Because the front angle completion resource does not include an ear, the intelligent completion image processing module cannot obtain the left ear of the left angle completion resource from the front angle completion resource. In this case, the intelligent completion image processing module can complete the left ear of the left angle completion resource according to an existing big data model, such as ear model data, so as to achieve the modeling effect.
Or, the intelligent completion image processing module may obtain the right angle completion resource according to the obtained front angle completion resource.
As shown in fig. 7F, the intelligent completion image processing module adjusts the front angle completion resource by rotating counterclockwise by a certain angle (e.g., 90 degrees) around a Z-axis, which is perpendicular to a horizontal plane (e.g., XOY plane), to obtain a right angle completion resource.
The intelligent completion image processing module rotates the front angle completion resource counterclockwise around the Z axis by a certain angle (e.g., 90 degrees), and then projects the rotated front angle completion resource onto a Z plane that is perpendicular to the horizontal plane (e.g., the XOY plane), so as to obtain the right angle completion resource.
Illustratively, as shown in fig. 7G, after the front angle completion resource is rotated counterclockwise by a certain angle around the Z axis and projected onto the Z plane, the right angle completion resource shown in fig. 7G is obtained.
Rotating the front angle completion resource counterclockwise by a certain angle around the Z axis means that the intelligent completion image processing module rotates all pixel points in the front angle completion resource counterclockwise by that angle; it then projects the rotated pixel points onto the Z plane. During projection, the pixel points of the outer contour are retained, and the pixel points of the inner contour are directly covered.
In some embodiments, a completion resource at one angle (e.g., the front angle completion resource) may be missing part of a human body part (e.g., an ear or hair). When that completion resource is rotated around the Z axis to obtain a completion resource at another angle (e.g., the right angle completion resource), the resulting completion resource is missing the same human body part. In this case, the intelligent completion image processing module may complete the missing human body part (e.g., the ear or hair) in the completion resource at the other angle (e.g., the right angle completion resource) according to an existing big data model, such as ear model data or hair model data, so as to achieve the modeling effect.
For example, if the front angle completion resource does not include an ear, then when the front angle completion resource is rotated counterclockwise by a certain angle (e.g., 90 degrees) around the Z axis to obtain the right angle completion resource, the right ear of the right angle completion resource needs to be completed. Because the front angle completion resource does not include an ear, the intelligent completion image processing module cannot obtain the right ear of the right angle completion resource from the front angle completion resource. In this case, the intelligent completion image processing module can complete the right ear of the right angle completion resource according to an existing big data model, such as ear model data, so as to achieve the modeling effect.
Similarly, the intelligent completion image processing module can also obtain completion resources at other angles (e.g., the front angle and the right angle) through angle rotation of a completion resource at one angle (e.g., the left angle completion resource), which is not described herein again.
After the intelligent completion image processing module obtains completion resources of a plurality of angles including a specified target, the intelligent completion image processing module carries out modeling according to a preset three-dimensional model (such as a three-dimensional head model or a three-dimensional house model) to obtain a three-dimensional high-definition picture.
It will be appreciated that the predetermined three-dimensional model is already available, and that different classes of specified objects have different three-dimensional modeling models. Such as a three-dimensional human head model, a three-dimensional house model, and a three-dimensional animal dog model, among others.
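The class-to-model lookup implied here could be as simple as the following sketch; the class names and model identifiers are hypothetical placeholders, not the patent's actual data.

```python
# Hypothetical mapping from recognized target class to its preset
# three-dimensional modeling model; the names are illustrative only.
PRESET_3D_MODELS = {
    "person": "three_dimensional_human_head_model",
    "house": "three_dimensional_house_model",
    "dog": "three_dimensional_animal_dog_model",
}

def select_preset_model(target_class):
    """Return the preset 3-D model for a recognized target class."""
    model = PRESET_3D_MODELS.get(target_class)
    if model is None:
        raise ValueError(f"no preset 3-D model for class {target_class!r}")
    return model
```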
The following describes a picture completion method provided in the embodiment of the present application with reference to an application scenario.
In some embodiments, when a user takes a picture using the electronic device 100, the definition of the picture may be low due to object motion or hand shake of the user. To prevent this, the electronic device 100 may start the intelligent completion function before the picture is taken. The electronic device 100 automatically identifies the specified target in the acquired first picture, extracts the features of the specified target, obtains a completion resource matched with the features of the specified target from the resource library based on those features, and pastes the completion resource back to the area where the specified target is located in the first picture, so as to obtain a high-definition picture and store it in the gallery. In this way, the electronic device 100 can improve the definition of the pictures it takes according to real picture resources.
The intelligent completion function is a picture restoration capability provided by the electronic device 100. After the electronic device 100 starts the intelligent completion function, the electronic device 100 may extract the feature of the specified target identified in the acquired picture, acquire a high-definition image matched with the feature of the specified target from the resource library based on the feature of the specified target, and paste the high-definition image back to the area where the specified target in the first picture is located, thereby achieving the capability of restoring the first picture according to the real picture resources.
Illustratively, FIG. 8A illustrates the user interface 30 of the electronic device 100. The user interface 30 may include icons for some applications. For example, an icon 309 of a clock, an icon 311 of a calendar, an icon 313 of a gallery, an icon 317 of file management, an icon 319 of an email, an icon 321 of music, an icon 325 of Huawei, an icon 327 of sports health, an icon 329 of weather, an icon 330 of a camera, an icon 331 of a contact list, an icon 332 of a telephone, and an icon 333 of information. In some embodiments, the user interface 30 may include more or fewer icons for applications. In some embodiments, icons for applications other than those shown in FIG. 8A, such as an instant messenger application, and the like, may be included in the user interface 30. And is not limited herein.
Illustratively, as shown in fig. 8A, the icon 330 of the camera may receive a click operation of the user, and in response, the electronic device 100 displays the user interface 40 shown in fig. 8B. The user interface 40 includes some function icons and controls as well as a live preview screen captured by the camera. For example, the user interface 40 includes a large aperture icon 401, a night view icon 402, a portrait icon 403, a photograph icon 404, a record icon 405, a professional icon 406, a more icon 407, a floodlight icon 408, a filter icon 409, a smart completion icon 410, a setting icon 411, a playback control 412, a shooting control 413, a front/rear camera switching control 414, and a real-time preview screen 400 captured by the camera.
As shown in fig. 8B, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the smart completion icon 410 in the user interface 40, and in response to the input, as shown in fig. 8C, the electronic device 100 turns on the intelligent completion function. By default, after starting the intelligent completion function, the electronic device 100 completes the first picture acquired by the camera in the two-dimensional beautification picture mode to obtain a high-definition picture.
In some embodiments, when the user needs to process the first picture acquired by the camera through the three-dimensional modeling picture mode in the camera application to obtain a high-definition three-dimensional picture, the electronic device 100 may receive an operation of the user and select the three-dimensional modeling picture mode to process the first picture acquired by the camera to obtain the high-definition three-dimensional picture.
Fig. 8D to 8F are diagrams illustrating the electronic apparatus 100 receiving an operation of the user to select the three-dimensional modeling picture mode.
Illustratively, as shown in fig. 8D, the setting icon 411 may receive a click operation of the user, and in response, the electronic device 100 displays the setting interface 50 shown in fig. 8E. The setting interface 50 includes a shooting mute control 415 (the shooting mute function is turned on); a timed shooting control 416 (the timed shooting function is turned off); a voice-controlled photographing control 417 (the voice-controlled photographing function is turned off); a volume key function control 419 (the volume key functions as a shutter); and a screen-off snapshot control 420 (if the electronic device 100 receives a double click of the volume down key in the screen-locked state, the electronic device 100 starts the camera and takes a picture). The setting interface 50 also includes a smart completion control 421. When the user wants to select the three-dimensional modeling picture mode, the smart completion control 421 may receive a click operation of the user, and in response, the electronic device 100 displays the setting interface 60 shown in fig. 8F.
As shown in fig. 8F, the setup interface 60 includes a smart completion priority control 422 and a three-dimensional smart completion mode control 423. The intelligent completion priority control 422 may receive a click operation of a user to select an intelligent completion area. The three-dimensional intelligent completion mode is displayed in an off state, the three-dimensional intelligent completion mode control 423 can receive clicking operation of a user, and the three-dimensional intelligent completion mode is displayed in an on state, that is, the electronic device 100 processes a first picture acquired by a camera through a three-dimensional modeling picture mode in camera application to obtain a high-definition three-dimensional picture.
After the electronic device 100 starts the intelligent completion function, the electronic device 100 by default completes the first picture acquired by the camera in the two-dimensional beautification picture mode to obtain a high-definition picture. As shown in fig. 8G, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the shooting control 413 in the user interface 40, and in response to the input, the electronic device 100 acquires an original image captured by the camera, where the original image is the first picture.
The electronic device 100 identifies the specified target in the first picture and the area where the specified target is located in the first picture according to a preset model, and performs feature extraction on the specified target. Here, the electronic device 100 by default completes all the specified targets recognized in the first picture and the areas where they are located. Then, the electronic device 100 acquires a completion resource matched with the features of the specified target from the real picture resources in the resource library based on the features of the specified target, and pastes the completion resource back to the area where the specified target is located in the first picture.
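The matching step — picking the resource-library entry whose features best fit the extracted target features — might look like this minimal sketch. It assumes features are vectors and uses cosine similarity as a stand-in criterion; the patent does not specify the actual metric.

```python
import numpy as np

def match_completion_resource(target_feature, library):
    """Return the name of the real picture resource in `library`
    (name -> feature vector) whose feature vector best matches the
    specified target's feature vector, by cosine similarity."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(library, key=lambda name: cosine(target_feature, library[name]))
```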
In some embodiments, the completion resource obtained by the electronic device 100 may be the preselected picture resource whose average score across multiple quality dimensions is the highest among the preselected pictures.
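Selecting the preselected picture with the highest average score across quality dimensions can be sketched as follows; the score structure is an assumed representation.

```python
def best_by_average_score(scores):
    """scores: picture name -> {dimension: quality score}.
    Return the preselected picture whose average score across all
    quality dimensions is the highest."""
    return max(scores,
               key=lambda name: sum(scores[name].values()) / len(scores[name]))
```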
For details of how the electronic device 100 obtains the matched completion resource from the real picture resources in the resource library according to the features of the specified target in the first picture and pastes the completion resource back to the area where the specified target is located in the first picture, please refer to the above embodiments; this application does not describe them in detail here.
In a possible implementation manner, after the completion resource is pasted back to the area where the specified target of the first picture is located, the electronic device 100 may directly store the intelligently completed picture to the gallery. The playback control 412 can receive an input operation (e.g., a single click) of the user, and in response, the electronic device 100 displays the photo browsing interface 70 shown in fig. 8H.
As shown in fig. 8H, the photo browsing interface 70 includes a high-definition picture 510 after the intelligent completion, information 520 of the high-definition picture 510, and function options 530. The information 520 of the high-definition picture 510 includes a shooting place, a shooting date, a shooting time, and the like. For example, the shooting place may be "Shenzhen Bay", the shooting date "yesterday", and the shooting time "17:28". The function options 530 may include a share control, a favorites control, a delete control, more controls, and the like.
In another possible implementation manner, after the electronic device 100 pastes the completion resource back to the area where the specified target of the first picture is located, the electronic device 100 simultaneously displays, on the user interface, the original image acquired by the camera and the image obtained after the intelligent completion of the original image, and the user may select to store one or more of the pictures displayed in the user interface.
Specifically, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the playback control 412 in the user interface 40, and in response to the input, the electronic device displays the picture browsing interface 540 shown in fig. 8I. The picture browsing interface 540 includes an original image 550 captured by the camera, an image 560 after the original image is intelligently completed, and a save control 570. Both the original image 550 and the image 560 can receive a click selection operation of the user. Thereafter, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the save control 570, and in response to the input, the electronic device 100 saves the original image 550 and the image 560 into the album. Alternatively, the image 560 alone may receive a click selection operation of the user; the electronic device 100 then receives an input operation (e.g., a single click) on the save control 570 and, in response, saves the image 560 into the album. Alternatively, the original image 550 alone may receive a click operation of the user; the electronic device 100 then receives an input operation (e.g., a single click) on the save control 570 and, in response, saves the original image 550 into the album.
In other embodiments, the electronic device 100 scores the preselected picture resources in different dimensions to obtain the completion resource with the highest quality score in each preset dimension. In this way, the electronic device 100 can obtain the completion resource with the highest quality score for each dimension, which improves the diversity of picture completion.
Illustratively, the preset dimension may be color, definition, texture, anti-shake, and the like.
Specifically, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the playback control 412 in the user interface 40, and in response to the input, the electronic device displays the picture browsing interface 580 shown in fig. 8J. The picture browsing interface 580 includes an image 5801 after the first picture is intelligently completed, an image 5802 after the first picture is intelligently completed, an image 5803 after the first picture is intelligently completed, an image 5804 after the first picture is intelligently completed, and a save control 590. One or any several of the image 5801, the image 5802, the image 5803, and the image 5804 can receive a click operation of the user. Thereafter, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the save control 590, and in response to the input, the electronic device 100 saves the one or more images selected by the user into the album.
Note that the image 5801, the image 5802, the image 5803, and the image 5804 are obtained after the first picture is intelligently completed according to different completion resources. For example, the image 5801 may be obtained after the electronic device 100 intelligently completes the first picture according to the preselected picture resource with the highest color score; the image 5802 according to the preselected picture resource with the highest definition score; the image 5803 according to the preselected picture resource with the highest texture score; and the image 5804 according to the preselected picture resource with the highest anti-shake score.
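Picking one best resource per dimension (color, definition, texture, anti-shake) — the basis for images 5801 through 5804 — can be sketched as follows; the score structure is an assumed representation.

```python
def best_per_dimension(scores):
    """scores: picture name -> {dimension: quality score}.
    Return, for each quality dimension, the preselected picture
    resource scoring highest in that dimension."""
    dims = next(iter(scores.values()))
    return {d: max(scores, key=lambda n: scores[n][d]) for d in dims}
```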
The above-described embodiments illustrated in fig. 8G to 8J exemplarily show UI diagrams in which the electronic device 100 completes all the designated targets identified in the first picture and the area where the designated target is located. In some embodiments, the electronic device 100 may receive a user operation to set the priority of the completion area in the first picture.
Fig. 8K to 8M exemplarily show diagrams of the electronic apparatus 100 receiving a user operation to set the priority of the complementing area in the first picture.
As shown in fig. 8K, the electronic device 100 receives a click operation on the smart completion priority control 422, and in response, the electronic device 100 displays the setting interface 80 shown in fig. 8L. The setting interface 80 includes an independent priority control 424, a set completion order control 425, and a merge priority control 426. The independent priority mode is displayed as off; in this mode, the electronic device 100 completes only the specified target with the highest category priority among the specified targets identified in the first picture. The merge priority mode is displayed as on; merge priority means that the electronic device 100 takes the areas where all the specified targets identified in the first picture are located as the completion areas. The electronic device 100 may receive a click operation of the user on the set completion order control 425, and in response, the electronic device 100 displays the setting interface 90 shown in fig. 8M.
The setting interface 90 exemplarily shows the priorities of the specified targets; the electronic device 100 completes the specified targets in the first picture in order from high priority to low. As shown in fig. 8M, the target with the first priority is the character, and the character corresponds to a drag control 427; the animal with the second priority corresponds to a drag control 428; the tree with the third priority corresponds to a drag control 429; the house with the fourth priority corresponds to a drag control 430; the moon with the fifth priority corresponds to a drag control 431; and the bag with the sixth priority corresponds to a drag control 432.
Illustratively, when the plurality of specified targets automatically recognized by the electronic device 100 from the first picture are a person, an animal, a flower, a house, a bag, and a tree, the electronic device 100 only needs to complete the recognized person in the first picture, since the person has the highest priority in the priority order of the specified targets set in the electronic device 100. When the plurality of specified targets automatically recognized by the electronic device 100 from the first picture are an animal, a flower, a house, a bag, and a tree, the electronic device 100 only needs to complete the recognized animal in the first picture, because the electronic device 100 recognizes no person in the first picture but does recognize an animal.
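The independent-priority selection described above — complete only the recognized target whose category priority ranks highest — can be sketched as follows. The default ordering follows fig. 8M; classes outside the list (such as the flower in the example) simply rank last.

```python
DEFAULT_PRIORITY = ["character", "animal", "tree", "house", "moon", "bag"]

def target_to_complete(recognized, priority=DEFAULT_PRIORITY):
    """Independent-priority mode: among the classes recognized in the
    first picture, return only the one with the highest category
    priority; classes absent from the priority list rank last."""
    if not recognized:
        return None
    rank = lambda c: priority.index(c) if c in priority else len(priority)
    return min(recognized, key=rank)
```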
The electronic apparatus 100 may also accept an operation of the user to set the priority of each designated target.
Illustratively, when the user wants to move the priority of the moon to first place, the drag control 431 may receive a drag operation of the user to swap the positions of the moon and the character, so that the priority of the moon is ranked first and the priority of the character is ranked fifth. In this way, the user can set the priority of each specified target as required, which improves the user experience.
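The drag-to-reorder behavior, where dragging the moon onto the character's slot swaps the two, amounts to a simple swap in the priority list; this sketch assumes the list representation used above.

```python
def swap_priority(priority, a, b):
    """Swap two specified targets in the completion priority list, as
    when the user drags the moon onto the character's slot. Returns a
    new list; the original is left unchanged."""
    p = list(priority)
    i, j = p.index(a), p.index(b)
    p[i], p[j] = p[j], p[i]
    return p
```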
In some embodiments, when a user takes a picture using the electronic device 100, the definition of the picture may be low due to object motion or hand shake of the user. To prevent this, the electronic device 100 may start the intelligent completion function before the picture is taken. The electronic device 100 determines the completion area in the first picture according to the sliding track of the user and extracts the features in the completion area. The electronic device 100 then acquires the completion resource matched with the features of the specified target from the resource library based on those features, and pastes the completion resource back to the area where the specified target is located in the first picture, so as to obtain a high-definition picture and store it in the gallery. In this way, the electronic device 100 can improve the definition of the pictures it takes according to real picture resources.
As shown in fig. 8N, fig. 8N exemplarily shows the user interface 40 of the camera, and please refer to the embodiment shown in fig. 8B for the description of the user interface 40 of the camera, which is not repeated herein.
Fig. 8N illustrates a live preview screen 400 captured by a camera, and the electronic device 100 may receive a sliding operation of the user on the live preview screen 400 to determine the completion area.
In response to the slide operation on the live preview screen 400, the electronic apparatus 100 displays the object frame 430 shown in fig. 8O. The electronic device 100 determines the area where the target box 430 is located as a completion area.
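One plausible way to derive the target box 430 from the sliding track is to take the axis-aligned bounding box of the track points; this is an illustrative assumption, since the patent does not specify how the track maps to the box.

```python
def track_to_target_box(track_points):
    """Map the user's sliding track, a sequence of (x, y) points, to a
    target box given as (left, top, right, bottom): the axis-aligned
    bounding box of the track points."""
    xs = [x for x, _ in track_points]
    ys = [y for _, y in track_points]
    return (min(xs), min(ys), max(xs), max(ys))
```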
As shown in fig. 8O, the electronic device 100 may receive an input operation (e.g., a single click) from a user with respect to the shooting control 413 in the user interface 40, and in response to the input, the electronic device 100 acquires an original image captured by a camera, where the original image is a first picture.
The electronic device 100 performs feature extraction on the specified target within the target box 430. Then, the electronic device 100 acquires a completion resource matched with the features of the specified target from the real picture resources in the resource library based on the features of the specified target, and pastes the completion resource back to the area where the specified target is located in the first picture.
For details of how the electronic device 100 obtains the matched completion resource from the real picture resources in the resource library according to the features of the specified target in the first picture and pastes the completion resource back to the area where the specified target is located in the first picture, please refer to the above embodiments; this application does not describe them in detail here.
In a possible implementation manner, after the completion resource is pasted back to the area where the target frame 430 of the first picture is located, the electronic device 100 may directly store the intelligently completed picture to the gallery. The playback control 412 can receive an input operation (e.g., a single click) of the user, and in response, the electronic device 100 displays the photo browsing interface 1000 shown in fig. 8P.
As shown in fig. 8P, the photo browsing interface 1000 includes a high-definition picture 1020 after the intelligent completion, information 1030 of the high-definition picture 1020, and function options 1010. The information 1030 of the high-definition picture 1020 includes a shooting place, a shooting date, a shooting time, and the like. For example, the shooting place may be "Shenzhen Bay", the shooting date "yesterday", and the shooting time "17:28". The function options 1010 may include a share control, a favorites control, a delete control, more controls, and the like.
In another possible implementation manner, after the electronic device 100 pastes the completion resource back to the area where the target frame 430 of the first picture is located, the electronic device 100 simultaneously displays, on the user interface, the original image acquired by the camera and the image after the intelligent completion of the original image, and the user may select to store one or more of the pictures displayed in the user interface.
As shown in fig. 8Q, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the playback control 412 in the user interface 40, and in response to the input, the electronic device displays the picture browsing interface 1100 shown in fig. 8Q. The picture browsing interface 1100 includes an original image 1110 captured by the camera, an image 1120 after the original image is intelligently completed, and a save control 1130. Both the original image 1110 and the image 1120 can receive a click selection operation of the user. Thereafter, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the save control 1130, and in response to the input, the electronic device 100 saves the original image 1110 and the image 1120 into the album. Alternatively, the image 1120 alone may receive a click selection operation of the user; the electronic device 100 then receives an input operation (e.g., a single click) on the save control 1130 and, in response, saves the image 1120 into the album. Alternatively, the original image 1110 alone may receive a click operation of the user; the electronic device 100 then receives an input operation (e.g., a single click) on the save control 1130 and, in response, saves the original image 1110 into the album.
In other embodiments, the electronic device 100 may receive a single-click operation by the user on the three-dimensional smart completion mode control 423, and the electronic device 100 switches from the two-dimensional smart completion mode to the three-dimensional smart completion mode. Thereafter, the electronic device 100 may identify the completion area according to the sliding track of the user. Thereafter, the shooting control 413 of the user interface 40 may receive an input operation (e.g., a single click) from the user, and the electronic device 100 acquires an original image captured by the camera, where the original image is a first picture. The electronic device 100 performs feature extraction on the specified target within the completion area. Then, the electronic device 100 acquires, from the real picture resources in the resource library, a completion resource matching the feature of the specified target, where the completion resource includes pictures of multiple angles matching the feature of the first picture. For example, the completion resource may include a front angle picture, a left angle picture, a right angle picture, and/or a rear angle picture, and so on. After the completion resource is obtained, the electronic device 100 obtains a three-dimensional stereoscopic picture of the first picture according to a preset three-dimensional model and the completion resource. Then, the electronic device 100 pastes the three-dimensional stereoscopic picture back to the completion area in the first picture.
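The three-dimensional completion step above can be sketched in code. This is a minimal stand-in and not the patented implementation: the preset three-dimensional model is reduced to a set of per-angle viewpoint weights, the multi-angle completion resources are blended into a single patch, and the patch is pasted back into the completion area. The function names and the weighting scheme are hypothetical.

```python
import numpy as np

def build_3d_patch(angle_pictures, angle_weights):
    """Blend multi-angle completion resources into one completed patch.

    Stand-in for texturing a preset three-dimensional model: each angle
    picture (front/left/right/rear) contributes according to a viewpoint
    weight taken from the model. All pictures are HxWx3 float arrays.
    """
    weights = np.asarray(angle_weights, dtype=float)
    weights = weights / weights.sum()  # normalize the contributions
    patch = np.zeros_like(angle_pictures[0], dtype=float)
    for pic, w in zip(angle_pictures, weights):
        patch += w * pic
    return patch

def paste_back(first_picture, patch, top, left):
    """Paste the completed patch into the completion area of the picture."""
    h, w = patch.shape[:2]
    out = first_picture.copy()
    out[top:top + h, left:left + w] = patch
    return out
```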
In some embodiments, when the user uses the electronic device 100 to take a picture, the electronic device 100 may start the intelligent completion function before taking the picture. The electronic device 100 determines the completion area in the first picture according to the sliding track of the user and completes the completion area according to the completion resource selected by the user. The electronic device 100 pastes the completion resource selected by the user back to the area where the designated target is located in the first picture, so as to obtain a high-definition picture, and stores the high-definition picture in the gallery. In this way, the electronic device 100 can complete the first picture according to a completion resource whose features differ from those of the specified target in the completion area, thereby realizing diversity of picture completion.
Referring to the embodiments shown in fig. 8N to 8O, the electronic device 100 receives a sliding operation of the user to determine that the area where the target frame 430 is located in the first picture is the completion area.
After the electronic device 100 determines the completion area according to the sliding track of the user, the electronic device 100 displays a prompt box 900 as shown in fig. 9A.
A plurality of completion resources are included within prompt box 900. The plurality of completion resources may include a completion resource 901, a completion resource 902, a completion resource 903, and a default completion resource 904, among others.
The features of the completion resource 901, the completion resource 902, and the completion resource 903 are not consistent with the feature of the specified target in the completion area. The user can select the completion resource 901, the completion resource 902, or the completion resource 903 to complete the completion area in the first picture. When the user wants to complete the completion area in the first picture with a completion resource having the same feature, the user may select the default completion resource 904. In response to the user selecting the default completion resource 904, the electronic device 100 matches, from the resource library, a completion resource consistent with the feature of the specified target in the completion area of the first picture, and completes the completion area in the first picture according to that completion resource.
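The choice described above, between a completion resource picked by the user and the default feature-matched one, can be sketched as follows; the dict layout of a resource (a "feature" vector) and the squared-distance criterion are assumptions for illustration.

```python
def choose_completion_resource(user_choice, offered, library, target_feature):
    """Return the completion resource to use for the completion area.

    user_choice: index into the offered resources (901/902/903), or None
    when the user picks the default resource 904, in which case the
    library is searched for the resource whose feature vector is closest
    to the specified target's feature.
    """
    if user_choice is not None:
        return offered[user_choice]
    def distance(res):  # squared Euclidean distance between feature vectors
        return sum((a - b) ** 2 for a, b in zip(res["feature"], target_feature))
    return min(library, key=distance)
```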
As shown in fig. 9A, when the electronic device 100 receives and responds to the operation of the user selecting the completion resource 903, the electronic device 100 completes the area where the target frame 430 is located in the first picture according to the completion resource 903.
As shown in fig. 9B, the electronic device 100 may receive an input operation (e.g., a single click) from a user with respect to the shooting control 413 in the user interface 40, and in response to the input, the electronic device 100 acquires an original image captured by a camera, where the original image is a first picture.
The electronic device 100 pastes the completion resource 903 back to the area where the target frame 430 is located in the first picture, so as to obtain a high-definition picture.
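Before the completion resource is pasted back, it generally has to be scaled so that it exactly covers the area of the target frame 430. Below is a minimal sketch of that step using nearest-neighbour resampling on a NumPy image array; the function name and the resampling choice are assumptions, not taken from this application.

```python
import numpy as np

def fit_to_frame(resource, frame_h, frame_w):
    """Nearest-neighbour resize of a completion resource (HxWx3 array)
    so that it exactly covers the target-frame area before pasting."""
    h, w = resource.shape[:2]
    rows = np.arange(frame_h) * h // frame_h  # source row per output row
    cols = np.arange(frame_w) * w // frame_w  # source column per output column
    return resource[rows][:, cols]
```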
In a possible implementation manner, after the electronic device 100 pastes the completion resource 903 back to the area where the target frame 430 of the first picture is located, the electronic device 100 may directly store the intelligently completed picture in the gallery. The playback control 412 can receive an input operation (e.g., a single click) by the user, and in response to the input operation, the electronic device 100 displays the photo browsing interface 1200 shown in fig. 9C.
As shown in fig. 9C, the photo browsing interface 1200 includes the intelligently completed high-definition picture 1220, information 1230 of the high-definition picture 1220, and a function option 1210. The information 1230 of the high-definition picture 1220 includes a shooting place, a shooting date, a shooting time, and the like. For example, the shooting place may be "shenzhen bay", the shooting date may be "yesterday", and the shooting time may be "17:28". The function option 1210 may include a share control, a favorite control, a delete control, a more control, and the like.
In another possible implementation manner, after the electronic device 100 pastes the completion resource back to the area where the target frame 430 of the first picture is located, the electronic device 100 simultaneously displays, on the user interface, the original image captured by the camera and the image obtained after intelligent completion of the original image, and the user may choose to save one or more of the pictures displayed in the user interface.
The electronic device 100 may receive an input operation (e.g., a single click) from the user on the playback control 412 in the user interface 40, and in response the electronic device 100 displays the picture browsing interface 1300 of fig. 9D. The picture browsing interface 1300 includes an original image 1310 (first picture) captured by the camera, an image 1320 (second picture) obtained after intelligent completion of the original image, and a save control 1330 (save control). Both the original image 1310 and the intelligently completed image 1320 can receive a click selection operation (seventh operation) from the user. When both are selected, the electronic device 100 may receive an input operation (eighth operation) (e.g., a single click) on the save control 1330 in the user interface 40, and in response saves both the original image 1310 and the intelligently completed image 1320 to the album. Alternatively, the intelligently completed image 1320 alone may receive a single-click selection operation; after an input operation (e.g., a single click) on the save control 1330, the electronic device 100 saves only the image 1320 to the album. Alternatively, the original image 1310 alone may receive a single-click selection operation; after an input operation (e.g., a single click) on the save control 1330, the electronic device 100 saves only the original image 1310 to the album.
In some embodiments, a gallery in the electronic device 100 may store multiple pictures. Some pictures in the gallery have low sharpness because the photographer's hand shook during shooting or because the resolution of the camera is low. The electronic device 100 may receive an operation of the user to start the smart completion function. The electronic device 100 identifies the area where the designated target is located and the feature of the designated target in the first picture, finds, in the resource library, the completion resource matching the feature of the designated target, pastes the completion resource back to the area where the designated target is located in the first picture to obtain a high-definition picture, and stores the high-definition picture in the gallery. Alternatively, the first picture may have a poor shooting effect, for example, a person squinting, red eyes, or short legs; the electronic device 100 can identify an area in the first picture that needs to be completed, for example, the area where the person's eyes are located, according to a sliding track of the user on the first picture. Then, the electronic device 100 extracts the feature of the specified target in the completion area, acquires the completion resource matching that feature from the resource library, and pastes the completion resource back to the area where the specified target is located in the first picture to beautify the first picture. In this way, the electronic device 100 can beautify the designated target in a partial area of the first picture according to the needs of the user.
On one hand, this avoids the distortion that occurs when an originally blurry first picture is processed with an image processing algorithm; on the other hand, the electronic device 100 can determine the area the user needs completed according to the user's sliding operation, beautify the first picture, and improve the user experience.
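The identify / extract / match / paste-back flow of the embodiments above can be sketched end to end. The detection, extraction, and pasting steps are passed in as callables because their internals are left to preset models in this application; cosine similarity as the matching criterion is an assumption.

```python
def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def smart_complete(picture, detect, extract, library, paste):
    """End-to-end sketch of two-dimensional smart completion:
    1. detect the area where the specified target is located,
    2. extract the target's feature vector from that area,
    3. find the library resource whose feature matches best,
    4. paste that completion resource back into the area."""
    region = detect(picture)
    feature = extract(picture, region)
    best = max(library, key=lambda r: cosine_similarity(feature, r["feature"]))
    return paste(picture, best["patch"], region)
```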
In some embodiments, the electronic device 100 may receive an operation of the user to turn on the smart completion function. The electronic device 100 identifies the area where the designated target is located and the feature of the designated target in the first picture, finds, in the resource library, the completion resource matching the feature of the designated target, pastes the completion resource back to the area where the designated target is located in the first picture to obtain a high-definition picture, and stores the high-definition picture in the gallery.
Illustratively, as shown in fig. 10A, the electronic device 100 may receive an input operation (e.g., a single click) by a user on the gallery application icon 313 in the user interface 30, and in response to the input operation, the electronic device 100 may display a gallery user interface 610 as shown in fig. 10B.
As shown in fig. 10B, the gallery user interface 610 may display thumbnails of one or more picture categories. Specifically, the gallery user interface 610 may include a thumbnail of all photos and the icon "all photos", where the number of pictures under all photos is 2465; a thumbnail of people and the icon "people", with 654 pictures; a thumbnail of scenes and the icon "scene", with 368 pictures; and a thumbnail of animals and the icon "animal", with 158 pictures.
The electronic apparatus 100 may receive an input operation (e.g., a single click) by the user on the animal thumbnail, and in response to the input operation, the electronic apparatus 100 may display a picture browsing interface 620 as shown in fig. 10C.
The picture browsing interface 620 includes a first picture 6201, information 6202 of the first picture 6201, and a function option 6203. The information 6202 of the first picture 6201 includes a shooting place, a shooting date, a shooting time, and the like. For example, the shooting place may be "shenzhen bay", the shooting date may be "yesterday", and the shooting time may be "17:28". The function option 6203 may include a share control 6204, a favorites control 6205, a picture completion control 6206, a delete control 6207, a more control 6208, and the like.
The electronic device 100 may receive an input operation (e.g., a single click) performed by the user on the picture completion control 6206. In response to the input operation, the electronic device 100 identifies the specified target and the area where it is located, extracts the feature of the specified target in the first picture 6201, acquires, from the resource library, the completion resource matching that feature, and pastes the completion resource back to the area where the specified target is located in the first picture 6201, so as to obtain the picture browsing interface 630 shown in fig. 10D.
The picture browsing interface 630 includes a high-definition picture 6209 (second picture), information 6202 of the high-definition picture 6209, and a function option 6203. The information 6202 of the high-definition picture 6209 includes a shooting place, a shooting date, a shooting time, and the like. For example, the shooting place may be "shenzhen bay", the shooting date may be "yesterday", and the shooting time may be "17:28". The function option 6203 may include a share control 6204, a favorites control 6205, a picture completion control 6206, a delete control 6207, a more control 6208, and the like.
In one possible implementation, the electronic device 100 replaces the first picture 6201 with the high-definition picture 6209, and saves the high-definition picture 6209 to the gallery.
In another possible implementation manner, the electronic device 100 obtains the high-definition picture 6209 according to the first picture 6201, and the electronic device 100 simultaneously saves the first picture 6201 and the high-definition picture 6209 in the gallery.
In other embodiments, when the first picture has a poor shooting effect, for example, a person squinting, red eyes, or short legs, the electronic device 100 may identify an area in the first picture that needs to be completed, for example, the area where the person's eyes are located, according to a sliding track of the user on the first picture. Then, the electronic device 100 extracts the feature of the specified target in the completion area, acquires the completion resource matching that feature from the resource library, and pastes the completion resource back to the area where the specified target is located in the first picture to beautify the first picture.
The electronic device 100 obtains a first picture, starts an intelligent completion function, and then the electronic device 100 beautifies the first picture acquired by the camera of the electronic device 100 according to a two-dimensional beautification algorithm by default.
As shown in fig. 10E, two-dimensional picture beautification can be classified into person beautification, landscape beautification, and the like. Two-dimensional picture beautification may also include animal beautification, building beautification, and the like. This application does not limit the categories of two-dimensional picture beautification.
This application is illustrated with person beautification and landscape beautification, which should not be construed as limiting.
When a person or a part of a person in the first picture needs to be completed, the electronic device 100 may determine the completion area in the first picture according to the sliding track of the user, where the completion area may be the area where the person's head is located, the area where the eyes, mouth, or nose is located, the area where the neck is located, the area where the legs are located, the area where the arms are located, or the like.
When the landscape in the first picture needs to be completed, the electronic device 100 may determine the completion area in the first picture according to the sliding track of the user, where the completion area may be the area where a building (e.g., an iron tower) is located, the area where people are located, the area where the background is located, and the like. In this way, the electronic device 100 can beautify the completion area in the first picture according to a better-quality completion resource in the resource library, thereby improving the visual effect of the first picture.
For example, the user may feel that the eyes in the first picture are too small, squinting, or that the legs are too short. The electronic device 100 may determine the completion area in the first picture according to the sliding track of the user, and find a better-quality completion resource from the resource library to beautify the first picture. On one hand, this avoids the distortion that occurs when an originally blurry first picture is processed with an image processing algorithm; on the other hand, the electronic device 100 can determine the area the user needs completed according to the user's sliding operation, improving the user experience. For example, when the completion area is the area where the person's eyes are located, the electronic device 100 acquires, from the resource library, a completion resource matching the features of the person's eyes, and selects better-quality eyes in the resource library as the completion resource according to a preset algorithm. For example, the electronic device 100 may select, from the resource library, eyes that are clearer and larger than the eyes of the person in the first picture. Even when the eyes of the person in the first picture are not red or squinting, the electronic device 100 may still select better-quality eyes from the resource library according to the preset algorithm as the completion resource to replace the eyes of the person in the first picture, thereby achieving the effect of beautifying the person in the first picture.
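The preset algorithm for picking better-quality eyes from the resource library might look like this sketch, assuming each library entry records a sharpness score and an eye size; both fields and the rule (sharper and larger than the eyes in the first picture) are illustrative assumptions.

```python
def pick_eye_resource(library, current_sharpness, current_size):
    """Select a better-quality eye resource from the library: it must be
    clearer and larger than the eyes in the first picture."""
    candidates = [r for r in library
                  if r["sharpness"] > current_sharpness and r["size"] > current_size]
    if not candidates:
        return None  # nothing better in the library: keep the original eyes
    return max(candidates, key=lambda r: r["sharpness"])
```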
As shown in FIG. 10F, FIG. 10F illustrates a picture viewing interface 1020. The picture browsing interface 1020 includes the first picture 1000, information 1010 of the first picture 1000, and a function option 6203. The information 1010 of the first picture 1000 includes a shooting location, a shooting date, a shooting time, and the like. For example, the shooting place may be "shenzhen bay", the shooting date is "yesterday", the shooting time is "17: 18". The functionality options 6203 may include a share control 6204, a favorites control 6205, a picture completion control 6206, a delete control 6207, and a more control 6208, among others.
When the user feels that the person's eyes in the first picture 1000 are too small, squinting, or red, the electronic device 100 may receive a sliding operation of the user on the first picture 1000 and identify a completion area, such as the area where the person's eyes are located, according to the sliding track.
For example, as shown in fig. 10G, the electronic device 100 may receive a sliding operation performed by the user on the first picture 1000, and determine that the area where the eyes of the person are located in the first picture 1000 is a completion area. As shown in fig. 10H, in response to the sliding operation, the electronic device 100 will display the target frame 1030 on the first picture 1000. The area in which the target frame 1030 is located includes the area in which the eyes of the person are located.
It should be noted that, in some embodiments, the electronic device 100 may not display the target frame 1030 on the first picture 1000, and is not limited herein.
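Deriving the target frame 1030 from the user's sliding track can be as simple as taking the axis-aligned bounding box of the touched points, as in this sketch (the point and box conventions are assumed):

```python
def target_frame_from_track(track):
    """Target frame (completion area) as the bounding box of a sliding
    track given as (x, y) touch points; returns (left, top, right, bottom)."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return min(xs), min(ys), max(xs), max(ys)
```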
Thereafter, as shown in fig. 10H, the electronic device 100 may receive an input operation (e.g., a single click) on the picture completion control 6206 by the user. In response to the input operation, the electronic device 100 extracts the feature of the specified target (e.g., the person's eyes) in the target frame 1030, acquires, from the resource library, the completion resource matching that feature in the first picture 1000, and pastes the completion resource back to the area where the specified target is located in the first picture 1000, so as to obtain the picture browsing interface 1040 shown in fig. 10I.
The picture browsing interface 1040 includes a picture 1050, information 1010 of the picture 1050, and a function option 6203. The information 1010 of the picture 1050 includes a shooting location, a shooting date, a shooting time, and the like. For example, the shooting place may be "shenzhen bay", the shooting date is "yesterday", the shooting time is "17: 28". The functionality options 6203 may include a share control 6204, a collection control 6205, a picture completion control 6206, a delete control 6207, and more controls 6208, among others.
In one possible implementation, the electronic device 100 replaces the first picture 1000 with the picture 1050 and saves the picture 1050 to the gallery.
In another possible implementation manner, the electronic device 100 obtains the picture 1050 according to the first picture 1000, and the electronic device 100 stores the first picture 1000 and the picture 1050 in the gallery at the same time.
Fig. 10A-10I exemplarily show that, when a picture in the gallery needs to be completed, the picture completion control 6206 in the picture browsing interface 620 may receive an input operation (e.g., a single click) from the user, and in response to the input operation, the electronic device 100 completes the specified target in the completion area of the first picture according to the two-dimensional smart completion mode.
When the user wants to complete the first picture using three-dimensional completion, as shown in fig. 10J, the electronic device 100 may receive an input operation (e.g., a single click) by the user on the more control 6208 in the user interface 620, and the electronic device 100 displays the interface 6209 shown in fig. 10K. The interface 6209 includes a three-dimensional smart completion mode icon 6210, which is in the off state.
As shown in fig. 10K, the three-dimensional smart completion mode icon 6210 may receive an input operation (e.g., a single click) from the user, and in response the electronic device 100 switches from the two-dimensional smart completion mode to the three-dimensional smart completion mode. Thereafter, the picture completion control 6206 in the picture browsing interface 620 may receive an input operation (e.g., a single click) from the user, and in response the electronic device 100 completes the specified target in the completion area of the first picture according to the three-dimensional smart completion mode. The electronic device 100 performs feature extraction on the specified target within the completion area in the first picture. Then, the electronic device 100 acquires, from the real picture resources in the resource library, a completion resource matching the feature of the specified target, where the completion resource includes pictures of multiple angles matching the feature of the first picture. For example, the completion resource may include a front angle picture, a left angle picture, a right angle picture, and/or a rear angle picture, and so on. After the completion resource is obtained, the electronic device 100 obtains a three-dimensional stereoscopic picture of the first picture according to a preset three-dimensional model and the completion resource. Then, the electronic device 100 pastes the three-dimensional stereoscopic picture back to the completion area in the first picture.
In some embodiments, when the user uses the electronic device 100 to take a picture, the resolution of the camera may be low, so the sharpness of the real-time preview picture obtained by the camera is low. To improve the sharpness of the taken picture, the electronic device 100 may receive a user operation and take the picture in the smart completion mode. The electronic device 100 may identify the area where the designated target is located in the first picture taken by the camera and extract the feature of the designated target. The electronic device 100 finds, in the resource library, the completion resource matching the feature of the designated target, pastes the completion resource back to the area where the designated target is located in the first picture to obtain a high-definition picture, and stores the high-definition picture in the gallery.
For example, the electronic device 100 may receive an input operation (e.g., a single click) by the user on the camera application icon 330, and in response the electronic device 100 may display the user interface 640 as shown in fig. 11A.
The user interface 640 includes some function icons, controls, and a real-time preview screen captured by the camera. For example, the user interface 640 includes a large aperture icon 401, a night view icon 402, a portrait icon 403, a photograph icon 404, a record icon 405, a professional icon 406, a smart completion icon 407, a more icon 408, a floodlight icon 409, a filter icon 410, a setting icon 411, a playback control 412, a shooting control 413, a front/rear camera switching control 414, and a real-time preview screen 650 captured by the camera.
The electronic device 100 may receive an input operation (e.g., a single click) by the user on the smart completion icon 407, and in response, as shown in fig. 11B, the electronic device 100 may switch the currently selected mode from the normal photographing mode to the smart completion photographing mode.
After the electronic device 100 receives the user operation of switching from the normal shooting mode to the smart completion shooting mode, the electronic device 100 may receive an input operation (e.g., a single click) of the user on the shooting control 413 in the user interface 640, and in response the electronic device 100 acquires an original image captured by the camera, where the original image is a first picture.
The electronic device 100 identifies, according to a preset model, the designated target in the first picture and the area where the designated target is located, and performs feature extraction on the designated target. The electronic device 100 acquires, from the real picture resources in the resource library, the completion resource matching the feature of the designated target, and pastes the completion resource back to the area where the designated target is located in the first picture.
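As a toy stand-in for the feature extraction the preset model performs on the designated target, a normalized per-channel colour histogram of the target's area already yields a vector that can be matched against the resource library. This extractor is illustrative only and is not the model used in this application.

```python
import numpy as np

def color_histogram_feature(region, bins=8):
    """Toy feature extractor: a normalized per-channel colour histogram
    of the pixels inside the designated target's area (HxWx3 uint8)."""
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(region[..., channel], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)
```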
For details of how the electronic device 100 obtains the matching completion resource from the real picture resources in the resource library according to the feature of the specified target in the first picture, please refer to the above embodiments; they are not described herein again.
In a possible implementation manner, after the completion resource is pasted back to the area where the designated target of the first picture is located, the electronic device 100 may directly store the intelligently completed picture in the gallery. The playback control 412 can receive an input operation (e.g., a single click) by the user, and in response the electronic device 100 displays the photo browsing interface 70 shown in fig. 8H. For details, please refer to the embodiment shown in fig. 8H, which is not described herein again.
In another possible implementation manner, after the electronic device 100 pastes the completion resource back to the area where the specified target of the first picture is located, the electronic device 100 simultaneously displays, on the user interface, the original image captured by the camera and the image obtained after intelligent completion of the original image, and the user may choose to save one or more of the pictures displayed in the user interface. For details, please refer to the embodiment shown in fig. 8I, which is not described herein again.
In some embodiments, the electronic device 100 may receive an input operation (e.g., a single click) by the user on the camera application icon 330, and in response, as shown in fig. 11C, the user interface 640 displayed by the electronic device 100 may further include a three-dimensional completion control 9000. For descriptions of the other controls in the user interface 640, please refer to the description of fig. 11A, which is not repeated here.
The electronic device 100 may receive an input operation (e.g., a single click) by the user on the smart completion icon 407, and in response, as shown in fig. 11B, the electronic device 100 may switch the currently selected mode from the normal photographing mode to the smart completion photographing mode.
After the electronic device 100 receives the user operation of switching from the normal shooting mode to the smart completion shooting mode, and after the shooting control 413 in the user interface 640 receives an input operation (e.g., a single click) of the user, the electronic device 100 completes the image captured by the camera using the two-dimensional completion algorithm by default. When the user wants to complete the image captured by the camera using the three-dimensional completion algorithm, as shown in fig. 11C, the three-dimensional completion control 9000 in the user interface 640 may receive an input operation (e.g., a single click) by the user; in response, after receiving an input operation (e.g., a single click) by the user on the shooting control 413 in the user interface 640, the electronic device 100 completes the image captured by the camera using the three-dimensional completion algorithm.
In other embodiments, the electronic device 100 may receive a single-click operation by the user on the three-dimensional completion control 9000 in the user interface 640, and the electronic device 100 switches from the two-dimensional smart completion mode to the three-dimensional smart completion mode. Thereafter, the electronic device 100 may identify the completion area according to the sliding track of the user. Thereafter, the shooting control 413 in the user interface 640 may receive an input operation (e.g., a single click) from the user, and the electronic device 100 acquires an original image captured by the camera, where the original image is a first picture. The electronic device 100 performs feature extraction on the specified target within the completion area. Then, the electronic device 100 acquires, from the real picture resources in the resource library, a completion resource matching the feature of the specified target, where the completion resource includes pictures of multiple angles matching the feature of the first picture. For example, the completion resource may include a front angle picture, a left angle picture, a right angle picture, and/or a rear angle picture, and so on. After the completion resource is obtained, the electronic device 100 obtains a three-dimensional stereoscopic picture of the first picture according to a preset three-dimensional model and the completion resource. Then, the electronic device 100 pastes the three-dimensional stereoscopic picture back to the completion area in the first picture.
In other embodiments, multiple pictures may be stored in the file management of the electronic device 100. Some pictures in file management are low in sharpness. The electronic device 100 may receive an operation of a user to start the smart replenishment function. The electronic device 100 identifies the area of the designated target and the characteristics of the designated target in the first picture, finds the completion resource matched with the characteristics of the designated target in the resource library according to the characteristics of the designated target, pastes the completion resource back to the area of the designated target in the first picture to obtain the high-definition picture, and stores the high-definition picture in file management.
As shown in fig. 12A, fig. 12A exemplarily shows a user interface 30 of an electronic device 100, where the user interface 30 may include icons of some application programs, and for the description of the user interface 30, reference is made to the embodiment shown in fig. 8A, which is not repeated herein.
Illustratively, as shown in fig. 12A, the file management icon 317 may receive a click operation of the user, and in response to the click operation, the electronic device 100 displays the user interface 120 shown in fig. 12B. The user interface 120 may include some functional icons, such as a pictures icon 1201, a videos icon 1202, a documents icon 1203, an audio icon 1204, a my phone icon 1205, a my cloud disk icon 1206, a network neighbor icon 1206, and a recent deletion icon 1207.
As shown in fig. 12B, the icon 1201 of the picture may receive a user click operation, and in response to the user click operation, the electronic device 100 displays the user interface 130 shown in fig. 12C.
User interface 130 includes picture 1301 and function options 1302. The functionality options 1302 may include a share control 1303, a favorites control 1304, a picture completion control 1305, a delete control 1306, and a more control 1307, among others. Picture 1301 is the first picture.
When the user thinks that the definition of the picture 1301 is low and the picture 1301 needs to be beautified, the electronic device 100 may receive a click operation of the user on the picture completion control 1305. In response to the click operation of the user, the electronic device 100 identifies the type and the features of the specified target in the picture 1301, extracts a completion resource matching the features of the specified target in the picture 1301 from the resource library based on those features, and pastes the completion resource back to the area where the specified target is located in the picture 1301, so as to obtain the user interface 140 shown in fig. 12D.
User interface 140 includes picture 1401 and function options 1302. The picture 1401 is a high-definition picture after the electronic device 100 completes the designated target in the picture 1301 according to the completion resource.
In other embodiments, because the first picture has a poor shooting effect, for example, a person squinting, red eyes, or legs appearing short, the electronic device 100 may identify an area in the first picture that needs to be completed, for example, the area in the first picture where the eyes of the person are located, according to a sliding track of the user on the first picture. Then, the electronic device 100 extracts the features of the specified target in the completion area, acquires a completion resource matching those features from the resource library, and pastes the completion resource back to the area where the specified target in the first picture is located to beautify the first picture. Specifically, please refer to the embodiments shown in fig. 10E-10I, which are not repeated herein.
Fig. 12A-12D exemplarily show that, when the electronic device 100 needs to complete a picture in file management, the picture completion control 1305 in the user interface 130 may receive an input operation (e.g., single click) by the user, and in response to the input operation (e.g., single click) by the user, the electronic device 100 will complete a specified target within a completion area in the picture in file management in a two-dimensional smart completion mode.
When the user wants to complete the first picture using three-dimensional completion, as shown in fig. 12E, the electronic device 100 may receive an input operation (e.g., a single click) by the user on the more control 1307 in the user interface 130, and the electronic device 100 displays the interface 150 shown in fig. 12F. Interface 150 includes a three-dimensional smart completion mode icon 1501, which indicates that the three-dimensional smart completion mode is currently off.
As shown in fig. 12F, the three-dimensional smart completion mode icon 1501 may receive an input operation (e.g., a single click) by the user, and in response to the input operation, the electronic device 100 switches from the two-dimensional smart completion mode to the three-dimensional smart completion mode. Thereafter, the picture completion control 1305 in the user interface 130 may receive an input operation (e.g., a single click) of the user, and in response, the electronic device 100 completes the specified target in the completion area in the picture in file management according to the three-dimensional smart completion mode. The electronic device 100 performs feature extraction on the specified target within the completion area in the first picture. Then, the electronic device 100 acquires, from the real picture resources in the resource library, a completion resource matching the features of the specified target, where the completion resource includes pictures of multiple angles matching the features of the first picture. For example, the completion resource may include a front-angle picture, a left-angle picture, a right-angle picture and/or a rear-angle picture, and so on. After the completion resource is obtained, the electronic device 100 obtains a three-dimensional stereoscopic picture of the first picture according to the preset three-dimensional model and the completion resource. Then, the electronic device 100 pastes the three-dimensional stereoscopic picture back to the completion area in the first picture to obtain a three-dimensionally completed picture.
In other embodiments, the first picture may be a picture in the internet. The electronic device 100 searches for a picture (for example, the first picture) on the internet through a search application, but the definition of the picture is low, and the electronic device 100 can receive an operation of the user to start the smart completion function. The electronic device 100 identifies the area of the specified target and the features of the specified target in the first picture, finds the completion resource matching those features in the resource library, and pastes the completion resource back to the area of the specified target in the first picture to obtain a high-definition picture; the user may then choose to store the high-definition picture in the gallery.
As shown in fig. 13A, fig. 13A exemplarily shows the user interface 30 of the electronic device 100, where the user interface 30 may include icons of some application programs, the user interface 30 includes an icon 334 of a search application, and please refer to the embodiment shown in fig. 8A for description of icons of other application programs in the user interface 30, which is not described herein again.
For example, as shown in fig. 13A, the icon 334 of the search application may receive a click operation of the user, and in response to the click operation, the electronic device 100 displays a user interface 1600 as shown in fig. 13B. User interface 1600 is a search user interface of the search application and may include a search box 1601 and a search control 1602. The electronic device 100 may receive an input (e.g., "picture") from the user in the search box 1601; thereafter, the electronic device 100 may receive a click operation of the user on the search control 1602, and in response to the click operation, the electronic device 100 searches for "picture" in the server of the search application.
As shown in fig. 13C, after the electronic device 100 receives an input operation (e.g., "picture") by the user in the search box 1601, the electronic device 100 may receive a click operation of the search control 1602 by the user, and in response to the click operation by the user, the electronic device 100 displays a picture browsing interface 1610 shown in fig. 13C.
Picture browsing interface 1610 includes pictures 1603 and functionality controls. The functionality controls may include a share control 1604, a download control 1605, a favorites control 1606, a picture completion control 1607, and a more control 1608, among others. Picture 1603 is the first picture.
When the user thinks that the definition of the picture 1603 is low and the picture 1603 needs to be beautified, the electronic device 100 may receive a click operation of the user on the picture completion control 1607. In response to the click operation of the user, the electronic device 100 identifies the type and the features of the specified target in the picture 1603, extracts a completion resource matching the features of the specified target in the picture 1603 from the resource library based on those features, and pastes the completion resource back to the area where the specified target in the picture 1603 is located, so as to obtain the user interface 1620 shown in fig. 13D.
The user interface 1620 includes a picture 1609 and a functionality control. The picture 1609 is a high-definition picture after the electronic device 100 completes the designated target in the picture 1603 according to the completion resource.
Thereafter, the electronic device 100 may receive a user click operation on the download control 1605 in the user interface 1620, and in response to the user click operation, the electronic device 100 saves the picture 1609 to a specified location (e.g., in the gallery) in the electronic device 100.
In other embodiments, because the first picture has a poor shooting effect, for example, a person squinting, red eyes, or legs appearing short, the electronic device 100 may identify an area in the first picture that needs to be completed, for example, the area where the person's eyes are located in the first picture, according to a sliding track of the user on the first picture. Then, the electronic device 100 extracts the features of the specified target in the completion area, acquires a completion resource matching those features from the resource library, and pastes the completion resource back to the area where the specified target in the first picture is located to beautify the first picture. Specifically, please refer to the embodiments shown in fig. 10E-10I, which are not repeated herein.
Fig. 13A-13D exemplarily show that, when the electronic device 100 needs to complete a picture from the internet, the picture completion control 1607 in the user interface 1610 may receive an input operation (e.g., a single click) from the user, and in response to the input operation, the electronic device 100 completes the specified target within the completion area in the picture from the internet in the two-dimensional smart completion mode.
When the user wants to complete the first picture using three-dimensional completion, as shown in fig. 13E, the electronic device 100 may receive an input operation (e.g., a single click) from the user on the more control 1608 in the user interface 1610, and the electronic device 100 displays an interface 1630 as shown in fig. 13F. Interface 1630 includes a three-dimensional smart completion mode icon 1611, which indicates that the three-dimensional smart completion mode is currently off.
As shown in fig. 13F, the three-dimensional smart completion mode icon 1611 may receive an input operation (e.g., a single click) by the user, and in response to the input operation, the electronic device 100 switches from the two-dimensional smart completion mode to the three-dimensional smart completion mode. Thereafter, the picture completion control 1607 in the user interface 1610 can receive an input operation (e.g., a single click) from the user, and in response, the electronic device 100 completes the specified target in the completion area in the picture from the internet according to the three-dimensional smart completion mode. The electronic device 100 performs feature extraction on the specified target within the completion area in the first picture. Then, the electronic device 100 acquires, from the real picture resources in the resource library, a completion resource matching the features of the specified target, where the completion resource includes pictures of multiple angles matching the features of the first picture. For example, the completion resource may include a front-angle picture, a left-angle picture, a right-angle picture and/or a rear-angle picture, and so on. After the completion resource is obtained, the electronic device 100 obtains a three-dimensional stereoscopic picture of the first picture according to the preset three-dimensional model and the completion resource.
Then, the electronic device 100 pastes the three-dimensional stereoscopic picture back to the completion area in the first picture to obtain a three-dimensionally completed picture. Thereafter, the electronic device 100 may receive a click operation of the user on the download control 1605 in the user interface 1620, and in response to the click operation, the electronic device 100 stores the three-dimensionally completed picture in a designated position (for example, the gallery) in the electronic device 100.
The following describes a picture completion method provided in an embodiment of the present application.
Fig. 14 schematically illustrates a flowchart of a picture completion method provided in an embodiment of the present application.
As shown in fig. 14, the method may include the steps of:
S1401, the electronic device 100 (first electronic device) acquires a first picture.
The first picture may be a picture acquired by a camera of the electronic device 100 in real time, or a picture in a gallery of the electronic device 100, or a picture in file management, or a picture sent to the electronic device 100 by another electronic device, or a picture in the internet, or the like.
If the first picture is a picture captured by a camera of the electronic device 100 in real time, before the electronic device 100 obtains the first picture captured by the camera, the electronic device 100 receives a first operation of the user, where the first operation (the fourth operation) may be an input operation (e.g., a single click) for the shooting control 413 in the user interface 40 shown in fig. 8G. The first picture may be the original image 550 captured by the camera shown in fig. 8I.
If the first picture is a picture in a gallery of the electronic device 100 or a picture sent to the electronic device 100 by another electronic device, before the electronic device 100 acquires the first picture, the electronic device 100 receives a first operation (fifth operation) of the user, where the first operation may be an input operation (e.g., a click) on an animal thumbnail icon (a thumbnail of the first picture) in the second user interface as shown in fig. 10B. In response to the first operation of the user, the electronic device 100 displays the first picture; the first picture may be the first picture 6201 shown in fig. 10C, or the picture 1000 shown in fig. 10F.
If the first picture is a picture in file management, before the electronic device 100 acquires the first picture, the electronic device 100 receives a first operation of the user, where the first operation may be an input operation (e.g., clicking) on an icon 1201 in the user interface 120 shown in fig. 12B. The first picture may be the first picture 1301 shown in fig. 12C.
If the first picture is a picture in the internet, before the electronic device 100 acquires the first picture, the electronic device 100 receives a first operation of the user, where the first operation may be an input operation (e.g., a single click) on the search control 1602 shown in fig. 13B. In response to the first operation of the user, the electronic device 100 displays the first picture, which may be the first picture 1603 shown in fig. 13C.
S1402, the electronic device 100 identifies a complementary region (first region) of the first picture.
If the first picture is a picture acquired by the camera of the electronic device 100 in real time, after the electronic device 100 acquires the first picture, the electronic device 100 further needs to receive a second operation of the user, where the second operation (the third operation) may be an input operation (e.g., a click) for the intelligent completion icon 410 in the user interface 40 shown in fig. 8B, and the second operation may also be an input operation (e.g., a click) for the intelligent completion control 407 shown in fig. 11A.
If the first picture is a picture in the gallery of the electronic device 100 or a picture sent to the electronic device 100 by another electronic device, the second operation may be an input operation (e.g., a click) to the picture completion control 6206 shown in fig. 10C, and the second operation (the sixth operation) may be an input operation (e.g., a click) to the picture completion control 6206 shown in fig. 10H.
If the first picture is a picture in file management of the electronic device 100, the second operation may also be an input operation (e.g., a single click) to the picture completion control 1305 shown in fig. 12C.
If the first picture is a picture in the internet, the second operation may also be an input operation (e.g., a single click) to the picture completion control 1607 shown in fig. 13C.
In response to the second operation of the user, the electronic device 100 determines a completion area in the first picture, and completes the completion area in the first picture according to the completion resource (the first completion resource).
The electronic device 100 is pre-provisioned with recognition models or algorithms for different designated targets. The electronic device 100 can recognize the designated object and the area where the designated object is located in the first picture through the recognition model or algorithm of the designated objects. The categories of designated objects include, but are not limited to, people, animals, trees, houses, flowers, moon, bags, cars, bowls, and the like.
After the electronic device 100 receives the user operation and turns on the intelligent completion function, the electronic device 100 by default takes the areas where all the specified targets identified by the electronic device 100 in the first picture are located as the completion areas.
In an alternative implementation manner, the electronic device 100 may receive an operation setting of the user to complete only the specified target whose category has the highest priority among all the specified targets identified by the electronic device 100. Specifically, please refer to the embodiments shown in fig. 8K to 8M, which are not repeated herein.
Illustratively, the category priorities of the specified targets set in the electronic device 100 are sorted from high to low as people, animals, trees, houses, moon, bags, and so on. The categories of the specified targets identified by the electronic device 100 in the first picture include a person, a tree, and the moon. Since the priority of people is the highest, when there are multiple people in the first picture, the electronic device 100 completes only the areas where all the people in the first picture are located. Alternatively, when there are multiple people in the first picture, the electronic device 100 calculates the area occupied by each person in the first picture and completes only the person occupying the largest area.
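The selection rule above can be sketched as follows. This is a minimal illustration under stated assumptions: each detected target is represented as a dict, and the field names (`category`, `area`) are hypothetical, not the patent's data structures:

```python
# Category priority, from highest to lowest, as in the example above.
PRIORITY = ["person", "animal", "tree", "house", "moon", "bag"]

def select_completion_targets(targets, largest_only=False):
    """Keep only the detected targets whose category has the highest
    priority; optionally keep just the one occupying the largest area."""
    if not targets:
        return []
    best_cat = min(targets, key=lambda t: PRIORITY.index(t["category"]))["category"]
    chosen = [t for t in targets if t["category"] == best_cat]
    if largest_only:
        chosen = [max(chosen, key=lambda t: t["area"])]
    return chosen

detected = [
    {"category": "person", "area": 1200},
    {"category": "person", "area": 3400},
    {"category": "tree", "area": 9000},
    {"category": "moon", "area": 400},
]
all_people = select_completion_targets(detected)                 # both people
biggest = select_completion_targets(detected, largest_only=True)  # largest person
```

Note that the tree has the largest raw area but is still excluded, because category priority is applied before area comparison, matching the order described above.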
In another alternative implementation, the electronic device 100 may receive a selection operation (first operation) of the first region on the first picture by the user, and determine the complementary region according to the selection operation. The selection operation may be a specific click or slide operation, please refer to the embodiments shown in fig. 8N to 8Q, or fig. 10F to 10I, which is not described herein again.
It should be noted that the electronic device 100 may receive a sliding operation of the user on the first picture, and the completion area determined from the sliding track has the highest priority.
Specifically, the electronic device 100 may by default take the areas where all the specified targets identified in the first picture are located as the completion areas. For example, the electronic device 100 recognizes a face, a tree, and the moon in the first picture, and completes the area where the face is located, the area where the tree is located, and the area where the moon is located according to the completion resources in the resource library. But the user only wants to complete the area of the first picture where the moon is located. The electronic device 100 may therefore receive a sliding operation of the user on the first picture and determine the area where the moon is located according to the sliding trajectory. Because the completion area determined from the sliding track has the highest priority, the electronic device 100 takes only the area where the moon is located in the first picture as the completion area and completes the specified target (the moon) in that area according to the completion resources in the resource library.
Alternatively, the electronic device 100 may have received the operation setting of the user so that only the specified target whose category has the highest priority among all the identified specified targets is completed. Illustratively, the electronic device 100 recognizes a face, a tree, and the moon in the first picture, and since the category priority of the face is the highest, the electronic device 100 completes only the region where the face is located in the first picture. But the user wants to complete the area of the first picture where the moon is located. The electronic device 100 may therefore receive a sliding operation of the user on the first picture and determine the area where the moon is located according to the sliding trajectory. Because the completion area determined from the sliding track has the highest priority, the electronic device 100 takes only the area where the moon is located in the first picture as the completion area and completes the specified target (the moon) in that area according to the completion resources in the resource library.
The electronic device 100 performs feature extraction on the specified target in the completion area to obtain the features of the specified target; the feature values of the specified target may be represented by a feature vector, which may encode color features, texture features, contour features, and other features of the target.
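As one hedged illustration of such a feature vector, a normalized color histogram can serve as a simple color feature. The 4-bin quantization and the function name below are illustrative choices, not the patent's actual feature-extraction algorithm:

```python
def color_histogram(pixels, bins=4):
    """Build a normalized per-channel color histogram as a flat feature
    vector. `pixels` is a list of (r, g, b) tuples with values 0-255."""
    step = 256 // bins
    hist = [0.0] * (3 * bins)
    for r, g, b in pixels:
        for channel, value in enumerate((r, g, b)):
            hist[channel * bins + min(value // step, bins - 1)] += 1
    total = len(pixels)
    return [count / total for count in hist]

# A tiny 2x2 "image": two reddish pixels, two bluish pixels.
feature = color_histogram([(255, 0, 0), (250, 0, 0), (0, 0, 255), (0, 0, 250)])
```

Each channel's bins sum to 1, so the full vector sums to 3; texture and contour features would be concatenated onto the same vector in a fuller implementation.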
And S1403, the electronic device 100 acquires the first completion resource from the resource library according to the characteristics of the specified target.
In an optional implementation manner, before the electronic device acquires the first picture, the electronic device may establish a resource library according to preselected picture resources and periodically (for example, once a week) update the resource library, where the resource library contains high-definition pictures of various categories of specified targets. After the electronic device acquires the first picture, the electronic device matches the completion resource from the resource library.
First, how the electronic device establishes a resource library according to the pre-selected picture resources is described.
Step one: the electronic device acquires a preselected picture resource, where the preselected picture resource includes pictures of various categories.
The preselected picture resource may be a plurality of pictures stored locally by the electronic device 100, a plurality of pictures stored in a cloud server, a plurality of pictures in the internet, or a plurality of pictures that can be acquired by other electronic devices that establish communication connection with the electronic device 100.
Specifically, when the preselected picture resource is a plurality of locally stored pictures, in order to ensure the timeliness of the completion resources, the electronic device 100 may screen the locally stored pictures and delete those whose creation date differs from a specified date by more than a preset value.
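The screening step above can be sketched as a simple date filter; a minimal illustration, with the record fields (`path`, `created`) and the 30-day threshold assumed for the example rather than taken from the patent:

```python
from datetime import date

def screen_by_age(pictures, today, max_age_days):
    """Drop pictures whose creation date differs from the reference date
    by more than the preset number of days, keeping resources timely."""
    return [p for p in pictures
            if abs((today - p["created"]).days) <= max_age_days]

local = [
    {"path": "a.jpg", "created": date(2021, 2, 20)},
    {"path": "b.jpg", "created": date(2019, 6, 1)},   # stale, gets dropped
    {"path": "c.jpg", "created": date(2021, 3, 1)},
]
fresh = screen_by_age(local, today=date(2021, 3, 3), max_age_days=30)
```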
When the preselected picture resource is a picture stored in the cloud server, the electronic device 100 establishes a communication connection with the cloud server, and the cloud server sends the preselected picture resource to the electronic device 100.
When the preselected picture resource is a picture saved in another electronic device (e.g., the electronic device 200) that establishes a communication connection with the electronic device 100 through Bluetooth or the like, the electronic device 100 establishes a communication connection with the electronic device 200, and the electronic device 200 transmits a picture stored in the electronic device 200 to the electronic device 100; or the electronic device 200 transmits a cloud picture of the electronic device 200 to the electronic device 100; or the electronic device 200 establishes a communication connection with the internet through wireless communication technologies such as a 4G network, a 5G network, or a wireless local area network (WLAN), a search engine transmits pictures in a database to the electronic device 200, and the electronic device 200 transmits the pictures to the electronic device 100.
When the preselected picture resource is a picture in the internet, that is, the electronic device 100 establishes a communication connection with the internet through a wireless communication technology such as a 4G network, a 5G network, or a Wireless Local Area Network (WLAN). The search engine transmits the pictures in the database to the electronic device 100.
Step two: the electronic device may determine the completion resources from the preselected picture resources in any one of the following manners to establish a resource library.
The first method is as follows: the electronic device obtains image parameters of the preselected picture resources, the image parameters including one or more of exposure, sharpness, color values, quality values, noise values, anti-shake values, flash values, and artifact values. The electronic device calculates quality scores of the preselected picture resources according to these image parameters, determines the completion resources according to the quality scores, and stores the completion resources in the resource library.
The electronic device determines a quality score for the completion resource based on one or more image parameters of the exposure value, the sharpness value, the color value, the quality perception value, the noise value, the anti-shake value, the focus value, the artifact value, and the like.
In a possible implementation manner, the intelligent recommendation module takes the average value of the multiple dimension values as the quality score of a completion resource, and takes the picture with the highest quality score among the completion resources as the final first completion resource to complete the completion area of the first picture.
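The averaging-and-selection step can be sketched as follows; a minimal illustration in which the dimension names and the 0-1 score range are assumptions for the example, not values specified by the patent:

```python
def quality_score(params):
    """Average the per-dimension image parameters (exposure, sharpness,
    noise, ...) into a single quality score."""
    return sum(params.values()) / len(params)

def pick_best(resources):
    # Return the candidate completion resource with the highest average score.
    return max(resources, key=lambda r: quality_score(r["params"]))

candidates = [
    {"path": "cat_1.jpg", "params": {"exposure": 0.8, "sharpness": 0.9, "noise": 0.7}},
    {"path": "cat_2.jpg", "params": {"exposure": 0.9, "sharpness": 0.95, "noise": 0.85}},
]
best = pick_best(candidates)
```

The per-dimension selection described next differs only in that `max` would be taken over a single key of `params` instead of the average.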
In another possible implementation manner, the intelligent recommendation module may also score the completion resources according to different dimension criteria to obtain scores of the completion resources in the different dimensions. The electronic device 100 obtains the completion resource with the highest quality score in a preset dimension. In this way, the electronic device 100 can obtain the completion resource with the highest quality score in the preset dimension, which improves the diversity of picture completion.
When there are multiple preset dimensions, the electronic device 100 may obtain multiple first completion resources. In an optional implementation manner, the electronic device 100 replaces the image in the completion area in the first picture with each of the multiple first completion resources, pasting each back to the area where the specified target is located in the first picture, so as to obtain multiple high-definition images. The user may choose to save any one or more of the multiple high-definition images. In another optional implementation manner, the intelligent recommendation module displays the multiple first completion resources on a user interface of the electronic device 100, and the electronic device 100 may receive a selection operation of the user to select one or more first completion resources in the user interface to complete the completion area in the first picture.
Specifically, please refer to the embodiments shown in fig. 4-5, which are not repeated herein.
The second method is as follows: subjective evaluation of image quality.
The subjective evaluation of image quality is divided into an absolute evaluation criterion of picture quality and a relative evaluation criterion of picture quality.
The absolute evaluation standard means that the user evaluates the quality of a picture to be evaluated against the quality of the original picture, according to the user's own knowledge and understanding, where the quality of the original picture is taken as the standard quality. Specifically, the picture to be evaluated and the original picture are alternately displayed to the user for observation according to a certain rule; the user then scores the quality of the picture to be evaluated within a certain time after it is displayed, and the average value of the multiple scores given by users is taken as the quality score of the picture to be evaluated.
The relative evaluation standard means that the user evaluates a batch of pictures to be evaluated against one another, sorts them from high to low according to picture quality, and gives each a picture quality score. The relative evaluation adopts single stimulus continuous quality evaluation (SSCQE). The specific method is as follows: a batch of pictures to be evaluated is displayed to the user in a certain order, and the user scores each picture while watching the batch. Specifically, please refer to the absolute evaluation criteria of picture quality described in Table 1, the relative evaluation criteria of picture quality described in Table 2, and the above embodiments, which are not repeated herein.
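A hedged sketch of turning such subjective scores into a ranking: each picture's user scores are averaged into a mean opinion score and the batch is sorted from high to low. The data and function name are illustrative only:

```python
def mean_opinion_scores(ratings):
    """Average the subjective scores each picture received from users,
    then rank pictures from highest to lowest mean score."""
    mos = {pic: sum(scores) / len(scores) for pic, scores in ratings.items()}
    return sorted(mos.items(), key=lambda kv: kv[1], reverse=True)

ratings = {
    "p1.jpg": [4, 5, 4],   # scores given by three users
    "p2.jpg": [2, 3, 2],
    "p3.jpg": [5, 5, 4],
}
ranked = mean_opinion_scores(ratings)
```

The top-ranked pictures are the ones the user "considers best in quality" and could be stored in the resource library as described below.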
In a specific implementation, the user may score the quality of a batch of pictures to be evaluated according to the absolute or relative evaluation criterion to obtain the one or more pictures the user considers best in quality, and the electronic device 100 receives a user operation and stores the one or more pictures in a resource library. The electronic device 100 then completes the first picture using the one or more pictures at the designated location as first completion resources. After the camera application enables the intelligent completion function, when the first picture collected by the camera needs to be completed, the intelligent recommendation module may complete it according to the one or more pictures in the resource library. In this way, the camera application completes the first picture collected by the camera according to the user's selection, improving user experience.
In one possible implementation, the repository may be a designated storage path in a gallery application of the electronic device 100.
As shown in fig. 14A, fig. 14A illustrates a gallery user interface 1300 in the electronic device 100. The gallery user interface 1300 may display thumbnails of one or more pictures. Specifically, the gallery user interface 1300 may include a thumbnail of all photos and the icon "all photos", where the number of pictures is 2465; thumbnails of people and the icon "people", where the number of pictures of people is 654; thumbnails of scenes and the icon "scene", where the number of pictures of scenes is 368; thumbnails of animals and the icon "animals", where the number of pictures of animals is 158; and thumbnails of recommendations and the icon "recommendations", where the number of recommended pictures is 158. That is, the 158 recommended pictures are the completion resources selected by the user. After the camera application enables the intelligent completion function, when the first picture acquired by the camera needs to be completed, the electronic device 100 directly finds, from the pictures under the "recommendations" classification, the completion resource matching the features of the low-definition specified target, and completes the first picture acquired by the camera according to that completion resource. In this way, the electronic device 100 can complete the first picture collected by the camera according to the user's selection, improving user experience.
After the electronic device acquires the first picture, it matches, from the resource library according to the features of the specified target in the first picture, a first completion resource whose feature similarity with the specified target is greater than a preset value.
In another optional implementation manner, after the electronic device acquires the first picture, the electronic device matches a complementary resource from a pre-selected picture resource (or a resource library).
The preselected picture resource may be a plurality of pictures stored locally by the electronic device 100, a plurality of pictures stored in a cloud server, a plurality of pictures in the internet, or a plurality of pictures that can be acquired by other electronic devices that establish communication connection with the electronic device 100.
Specifically, when the preselected picture resource is a plurality of locally stored pictures, the electronic device 100 finds, from those pictures according to the features of the specified target, a completion resource whose feature similarity with the specified target is greater than a preset value.
In order to ensure the timeliness of the completion resource, the electronic device 100 may screen the plurality of locally stored pictures, deleting those whose creation date differs from a specified date by more than a preset value, so as to obtain a picture set. The electronic device 100 then searches the picture set for a completion resource whose feature similarity with the specified target is greater than a preset value. This ensures the timeliness of the completion resource and better matches the user's current behavior characteristics.
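The timeliness screening just described can be sketched as a date filter. All names and the threshold are hypothetical; the patent does not specify the data layout:

```python
# Hedged sketch of the timeliness filter: keep only locally stored pictures
# whose creation date is within `max_days` of a reference date.
from datetime import date

def filter_by_timeliness(pictures, reference_date, max_days):
    """pictures: list of (path, creation_date) tuples; returns the picture set."""
    return [
        (path, created)
        for path, created in pictures
        if abs((created - reference_date).days) <= max_days
    ]

photos = [
    ("old.jpg", date(2020, 1, 1)),      # too old, gets deleted from the set
    ("recent.jpg", date(2021, 2, 20)),  # within the window, kept
]
fresh = filter_by_timeliness(photos, reference_date=date(2021, 3, 1), max_days=30)
```

The similarity search described above would then run only over `fresh` rather than the full local gallery.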
When the preselected picture resource is pictures stored in a cloud server, it may be pictures stored in the cloud server of the electronic device 100. The electronic device 100 establishes a communication connection with the cloud server, and the intelligent search module sends the features of the specified target in the first picture to the cloud server. The cloud server searches the pictures it stores for a completion resource whose feature similarity with the specified target is greater than a preset value, and sends the matched completion resource to the intelligent search module.
When the preselected picture resource is pictures saved in another electronic device (e.g., the electronic device 200) that establishes a communication connection with the electronic device 100 through Bluetooth or the like: the electronic device 100 establishes a communication connection with the electronic device 200, and the intelligent search module sends the features of the specified target in the first picture to the electronic device 200. The electronic device 200 searches the pictures it stores for a completion resource matching the features of the specified target, or searches its own cloud pictures for such a resource; alternatively, the electronic device 200 establishes a communication connection with the internet through wireless communication technologies such as a 4G network, a 5G network, or a wireless local area network (WLAN), and sends the features of the specified target in the first picture to a search engine (e.g., Baidu). The search engine finds the completion resource matching the features of the specified target from the pictures in its database and sends it to the electronic device 200, and the electronic device 200 sends the completion resource to the intelligent search module.
When the preselected picture resource is pictures in the internet, the electronic device 100 establishes a communication connection with the internet through wireless communication technologies such as a 4G network, a 5G network, or a wireless local area network (WLAN). The intelligent search module sends the features of the specified target in the first picture to a search engine (e.g., Baidu). The search engine finds the completion resource matching the features of the specified target from the pictures in its database and sends it to the intelligent search module.
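However the repository is reached (local, cloud, peer device, or internet), the matching step itself reduces to comparing feature vectors against a similarity threshold. The sketch below uses cosine similarity as a stand-in metric; the patent does not specify the feature representation, so the vectors and names are our assumptions:

```python
# Minimal sketch (ours, not the patent's implementation) of matching completion
# resources whose feature vectors exceed a preset similarity with the target's.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_completion_resources(target_feature, repository, threshold=0.8):
    """repository: list of (name, feature_vector); returns matches sorted best-first."""
    scored = [(name, cosine_similarity(target_feature, feat))
              for name, feat in repository]
    return sorted([(n, s) for n, s in scored if s > threshold],
                  key=lambda item: item[1], reverse=True)

repo = [("moon_hd.jpg", [0.9, 0.1, 0.4]), ("cat.jpg", [0.1, 0.9, 0.2])]
matches = match_completion_resources([1.0, 0.0, 0.5], repo)
```

A real pipeline would extract the feature vectors with a learned model; only the thresholded nearest-match logic is shown here.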
In some embodiments, when the completion resource is a single picture, the electronic device directly replaces the image in the completion area with the completion resource and pastes it back to the completion area in the first picture to obtain the high-definition picture.
In other embodiments, when the completion resources are multiple pictures, the electronic device needs to score their quality to obtain the one or more first completion resources with the highest quality scores.
In an optional implementation manner, the electronic device 100 may take the one picture among the completion resources with the highest average of the multi-dimensional quality scores as the first completion resource.
In another optional implementation manner, the electronic device 100 may obtain multiple first completion resources, that is, the electronic device 100 respectively uses multiple pictures with the highest quality scores of different dimensions in the obtained completion resources as the first completion resources.
The completion resources include multiple pictures of the same category; therefore, the electronic device 100 needs to score the quality of the multiple completion resources of the same category.
For example, when the electronic device 100 determines that the specified target is the moon: since the gallery of the electronic device 100 stores multiple high-definition moon pictures, the electronic device 100 needs to score the quality of these moon pictures to obtain the moon picture with the highest quality score, or the multiple moon pictures scoring highest in multiple preset dimensions. The electronic device 100 then completes the completion area of the first picture using the moon picture with the highest quality score as the final first completion resource, or completes it using the multiple moon pictures with the highest preset-dimension scores as first completion resources, obtaining multiple completed pictures.
The electronic device 100 may score the quality of the preselected picture resource in any one of the following ways.
Objective evaluation of image quality
In the objective evaluation of picture quality, the intelligent recommendation module determines the quality score of a completion resource based on one or more picture parameters, such as an exposure value, a definition value, a color value, a texture value, a noise value, an anti-shake value, a focus value, and an artifact value.
In a possible implementation manner, the intelligent recommendation module takes the average of the multiple dimension values as the quality score of a completion resource, and takes the picture with the highest quality score among the completion resources as the final first completion resource to complete the completion area of the first picture.
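The two selection strategies above (highest average score, or highest score per preset dimension) can be sketched as follows. The dimension names and scores are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch: each candidate completion resource carries per-dimension
# objective scores; pick the best overall, or the best per preset dimension.

def overall_quality(dimension_scores):
    """Average of the per-dimension values, used as the overall quality score."""
    return sum(dimension_scores.values()) / len(dimension_scores)

def best_resource(candidates):
    """candidates: dict name -> dict of per-dimension scores; best overall."""
    return max(candidates, key=lambda name: overall_quality(candidates[name]))

def best_per_dimension(candidates, dims):
    """One winner per preset dimension -> multiple first completion resources."""
    return {d: max(candidates, key=lambda n: candidates[n][d]) for d in dims}

candidates = {
    "moon_a.jpg": {"exposure": 8, "definition": 9, "color": 7},
    "moon_b.jpg": {"exposure": 6, "definition": 7, "color": 9},
}
first_completion_resource = best_resource(candidates)
per_dim = best_per_dimension(candidates, ["exposure", "color"])
```

`best_per_dimension` corresponds to the case where multiple preset dimensions yield multiple first completion resources for the user to choose among.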
In another possible implementation manner, the intelligent recommendation module may also score the completion resources according to different dimension criteria to obtain each resource's score in each dimension. The electronic device 100 obtains the completion resource with the highest score in a preset dimension and uses it as the first completion resource, which improves the diversity of picture completion. When there are multiple preset dimensions, the electronic device 100 may obtain multiple first completion resources. In an optional implementation manner, the electronic device 100 replaces the image in the completion area in the first picture with each of the multiple first completion resources in turn, pasting each back to the area in the first picture where the specified target is located, so as to obtain multiple high-definition pictures; the user may choose to save any one or more of them. In another optional implementation manner, the intelligent recommendation module displays the multiple first completion resources on a user interface of the electronic device 100, and the electronic device 100 may receive a selection operation of the user to select one or more first completion resources in the user interface to complete the completion area in the first picture.
Specifically, please refer to the embodiments shown in fig. 4-5, which are not repeated herein.
In another possible implementation, before the first electronic device modifies the image in the first region according to the first completion resource, the first electronic device displays a first user interface (e.g., the prompt box 900 shown in fig. 9A) including the first completion resource; the first electronic device receives a second operation (e.g., a click) for the first completion resource; and in response to the second operation, the first electronic device modifies the image in the first region according to the first completion resource.
S1404, the electronic device 100 replaces the image in the completion area with the first completion resource, and pastes the first completion resource back to the completion area in the first picture to obtain a high definition picture (second picture).
The electronic device 100 replaces the image in the completion area with the first completion resource and pastes it back to the completion area in the first picture to obtain the high-definition picture. Specifically: the electronic device 100 crops the image in the completion area in the first picture, replaces it with the first completion resource, and then places the first completion resource in the completion area in the first picture to obtain the second picture. The center point of the first completion resource in the second picture coincides with the center point of the image that occupied the completion area in the first picture before cropping.
The electronic device 100 may also modify the completion area in the first picture according to the first completion resource in other manners to obtain the second picture.
Manner one: the electronic device 100 does not crop the image in the completion area in the first picture; instead, it directly overlays the first completion resource on the image in the completion area to obtain the second picture, where the center point of the first completion resource coincides with the center point of the image in the completion area.
Manner two: the electronic device 100 does not crop the image in the completion area in the first picture; instead, it fuses the features of the first completion resource with the image features in the completion area to obtain the second picture.
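The center-aligned replacement described above can be sketched as follows. Nested lists stand in for pixel buffers, and all names are our own illustration rather than the patent's code:

```python
# Minimal sketch: overwrite the completion area's pixels with the (already
# resized) completion resource, aligning the two center points.

def paste_back(picture, resource, top, left):
    """Overwrite picture pixels starting at (top, left) with resource pixels."""
    out = [row[:] for row in picture]  # copy so the original is untouched
    for r, row in enumerate(resource):
        for c, pixel in enumerate(row):
            out[top + r][left + c] = pixel
    return out

def paste_center_aligned(picture, resource, area_top, area_left, area_h, area_w):
    """Place resource so its center coincides with the completion area's center."""
    res_h, res_w = len(resource), len(resource[0])
    top = area_top + (area_h - res_h) // 2
    left = area_left + (area_w - res_w) // 2
    return paste_back(picture, resource, top, left)

pic = [[0] * 4 for _ in range(4)]   # first picture, 4x4, completion area = whole
res = [[9, 9], [9, 9]]              # 2x2 first completion resource
second_picture = paste_center_aligned(pic, res, 0, 0, 4, 4)
```

Manner two's feature fusion would replace the plain overwrite with a blend of the two pixel sources; only the geometric alignment is shown here.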
When the electronic device 100 receives an operation of the user and enables the intelligent completion function, the electronic device 100 by default completes the completion area in the first picture using a two-dimensional picture beautification algorithm.
First, the electronic device 100 adjusts the size of the specified target in the first completion resource to be consistent with the size of the specified target in the completion area in the first picture. Then, the electronic device 100 adjusts the angle and depth of the specified target in the first completion resource to be consistent with those of the specified target in the completion area. For details, please refer to the description in the above embodiments of the intelligent completion image processing module completing the first picture acquired by the camera according to the two-dimensional picture beautification algorithm, which is not repeated here.
Then, after the electronic device 100 adjusts the specified target in the first completion resource to be consistent in size, angle, and depth with the specified target in the completion area in the first picture, the electronic device 100 crops the image in the first area, replaces it with the adjusted first completion resource, and pastes the adjusted first completion resource back to the completion area in the first picture to obtain the high-definition picture. In the high-definition picture, the center point of the specified target in the first completion resource coincides with the center point of the specified target in the completion area in the first picture.
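The size-matching step of this flow can be sketched with a nearest-neighbor resize. Real implementations would use a proper image library and also handle angle and depth; this sketch, with nested lists as pixel buffers and names of our own, shows only the resizing:

```python
# Hedged sketch of step one of the 2-D beautification flow: resize the target
# region of the completion resource to match the target size in the area.

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbor resize of a nested-list image to (out_h, out_w)."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

src = [[1, 2], [3, 4]]               # 2x2 target in the completion resource
resized = resize_nearest(src, 4, 4)  # scaled up to the 4x4 target in the area
```

After resizing, the adjusted resource would be pasted back with the center-point alignment the text describes.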
In some embodiments, the electronic device 100 receives a user operation to perform completion on the completion region in the first picture by using a three-dimensional modeling picture algorithm.
First, the electronic device 100 obtains, from a resource library (e.g., a gallery) according to the features of the specified target in the completion area, existing completion resources containing multiple angles of the specified target, for example, a front-angle completion resource, a left-angle completion resource, a right-angle completion resource, and/or a rear-angle completion resource.
In some embodiments, the intelligent completion image processing module may obtain pictures at other angles by rotating the obtained current-angle completion resource around a Z axis, where the Z axis is perpendicular to the horizontal plane.
For example, the intelligent completion image processing module obtains the left angle completion resource according to the front angle completion resource, obtains the right angle completion resource according to the front angle completion resource, and the like. Specifically, please refer to the embodiments shown in fig. 7D-7G, which are not repeated herein.
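The underlying math of the Z-axis rotation can be sketched on a single 3-D point. A real pipeline rotates a whole 3-D model rather than points, so this is only an illustration of the operation; the names are ours:

```python
# Hedged sketch: rotate a 3-D point about the vertical Z axis by `angle`
# radians, the operation used above to derive side-angle views from a
# front-angle completion resource.
import math

def rotate_about_z(point, angle):
    x, y, z = point
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    # Standard Z-axis rotation matrix applied to (x, y); z is unchanged
    # because the axis is perpendicular to the horizontal plane.
    return (x * cos_a - y * sin_a, x * sin_a + y * cos_a, z)

# Rotating the front-view direction (0, 1, 0) by 90 degrees yields a side view.
side = rotate_about_z((0.0, 1.0, 0.0), math.pi / 2)
```

Applying the same rotation with a negative angle would give the opposite side view, matching the left/right-angle resources mentioned above.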
After the electronic device 100 acquires the completion resources including the multiple angles of the specified target, the electronic device 100 performs modeling according to a preset three-dimensional model (for example, a three-dimensional head model or a three-dimensional house model), so as to obtain a three-dimensional high-definition picture (a first completion resource).
It will be appreciated that the preset three-dimensional models already exist, and different categories of specified targets have different three-dimensional modeling models, such as a three-dimensional human head model, a three-dimensional house model, and a three-dimensional dog model.
The electronic device 100 replaces the image in the completion area with the three-dimensional high-definition picture and pastes it back to the completion area in the first picture to obtain the high-definition picture.
When there is only one completion resource, in a possible implementation manner, the electronic device 100 may directly store the high-definition picture obtained after the intelligent completion into the gallery. A user may view the high definition picture in the gallery, specifically please refer to the embodiment shown in fig. 8H, which is not described herein again.
In another possible implementation manner, after the electronic device 100 crops the image in the first area, replaces it with the three-dimensional high-definition picture, and pastes the three-dimensional high-definition picture back to the completion area of the first picture, the electronic device 100 simultaneously displays, on the user interface, the original image acquired by the camera and the high-definition picture obtained by intelligently completing it; the user may choose to save one or more of the pictures displayed in the user interface. For details, please refer to the embodiment shown in fig. 8I, which is not repeated here.
When there are multiple completion resources, the electronic device 100 completes the first picture with the multiple completion resources respectively to obtain a completed picture corresponding to the multiple completion resources respectively, specifically, please refer to the embodiment shown in fig. 8J, which is not described herein again.
In one possible implementation, after the first electronic device modifies the image in the first area according to the first completion resource to obtain the second picture, the first electronic device displays a fourth user interface (e.g., the picture browsing interface 1300 of fig. 9D), where the fourth user interface includes the first picture (e.g., the original image 1310), the second picture (e.g., the image 1320 obtained by intelligently completing the original image), and a save control (e.g., the save control 1330). The first electronic device receives a seventh operation for the first picture and/or the second picture; and, upon receiving and in response to an eighth operation for the save control, the first electronic device saves the first picture and/or the second picture to the storage path corresponding to the gallery application.
Another picture completion method provided in the embodiment of the present application is described below. As shown in fig. 15, fig. 15 is a flowchart of another picture completing method provided in the embodiment of the present application.
S1501, the electronic device 100 acquires a first picture.
The first picture may be a picture acquired by a camera of the electronic device 100 in real time, or a picture in a gallery of the electronic device 100, or a picture in file management, or a picture sent to the electronic device 100 by another electronic device, or a picture in the internet, or the like.
If the first picture is a picture captured by a camera of the electronic device 100 in real time, before the electronic device 100 obtains the first picture captured by the camera, the electronic device 100 receives a first operation from the user, where the first operation may be an input operation (e.g., a click) on the shooting control 413 in the user interface 40 shown in fig. 8G. The first picture may be the original image 550 captured by the camera shown in fig. 8I.
If the first picture is a picture in the gallery of the electronic device 100 or a picture sent to the electronic device 100 by another electronic device, before the electronic device 100 acquires the first picture, the electronic device 100 receives a first operation of the user, where the first operation may be an input operation (e.g., a single click) on an animal thumbnail icon as shown in fig. 10B. In response to the first operation, the electronic device 100 displays the first picture, which may be the first picture 6201 shown in fig. 10C, or the picture 1000 shown in fig. 10F.
If the first picture is a picture in file management, before the electronic device 100 acquires the first picture, the electronic device 100 receives a first operation of the user, where the first operation may be an input operation (e.g., clicking) on an icon 1201 in the user interface 120 shown in fig. 12B. The first picture may be the first picture 1301 shown in fig. 12C.
If the first picture is a picture in the internet, before the electronic device 100 acquires the first picture, the electronic device 100 receives a first operation of the user, which may be an input operation (e.g., a single click) on the search icon 1602 as shown in fig. 13B. In response to the first operation, the electronic device 100 displays the first picture, which may be the first picture 1603 shown in fig. 13C.
S1502, the electronic device 100 determines a complete area of the first picture.
If the first picture is a picture collected by a camera of the electronic device 100 in real time, the electronic device 100 may receive a sliding operation of the user on the first picture and determine the completion area according to the sliding track. For details, please refer to the embodiments shown in fig. 8N-8O, which are not repeated here.
If the first picture is a picture in the gallery of the electronic device 100, a picture sent to the electronic device 100 by another electronic device, a picture in file management, or a picture in the internet, the electronic device 100 may likewise receive a sliding operation of the user on the first picture and determine the completion area according to the sliding track.
S1503, the electronic device 100 displays first prompt content, where the first prompt content includes one or more completion resources, and the category of the specified target in the one or more completion resources is different from the category of the specified target in the completion area.
After the electronic device 100 determines the completion area, the electronic device 100 displays first prompt content, where the first prompt content includes one or more completion resources.
It should be noted that the category of the specified target in the one or more completion resources is different from the category of the specified target in the completion area, so the electronic device 100 can complete the first picture with completion resources of a different category from the specified target in the completion area, realizing diversity in picture beautification.
For example, if the first picture is a picture captured by the camera of the electronic device 100 in real time, the first prompt content may be the prompt box 900 shown in fig. 9A. The one or more completed resources may be completed resource 1, completed resource 2, completed resource 3, and default completed resource, etc. within the prompt box 900.
The category of the specified target in completion resource 1, completion resource 2, and completion resource 3 is inconsistent with the category of the specified target in the completion area. The user may select completion resource 1, completion resource 2, or completion resource 3 to complete the completion area in the first picture. When the user wants to complete the completion area in the first picture with completion resources having the same features, the user may select the default completion resource; in response to the user's selection of the default completion resource, the electronic device 100 matches, from the resource library, a completion resource consistent with the features of the specified target in the completion area of the first picture, and completes the completion area in the first picture according to that completion resource.
If the first picture is a picture in the gallery of the electronic device 100 or a picture in file management or a picture in the internet sent to the electronic device 100 by other electronic devices, the prompt box 900 is displayed on the picture browsing interface. The one or more completed resources may be completed resource 1, completed resource 2, completed resource 3, and default completed resource, etc. within the prompt box 900.
The category of the specified target in completion resource 1, completion resource 2, and completion resource 3 is inconsistent with the category of the specified target in the completion area. The user may select completion resource 1, completion resource 2, or completion resource 3 to complete the completion area in the first picture. When the user wants to complete the completion area in the first picture with completion resources having the same features, the user may select the default completion resource; in response to the user's selection of the default completion resource, the electronic device 100 matches, from the resource library, a completion resource consistent with the features of the specified target in the completion area of the first picture, and completes the completion area in the first picture according to that completion resource.
S1504, the electronic device 100 determines the first completion resource.
Illustratively, the selection operation for the first completion resource may be a selection operation for completion resource 3 as shown in fig. 9A.
The category of the specified target in the first completion resource is inconsistent with the category of the specified target in the completion area of the first picture. Illustratively, the specified target category in the first completion resource is a dog, while the specified target category in the completion area of the first picture is a cat.
In an optional implementation manner, the electronic device 100 need not receive a user selection of a first completion resource among the one or more completion resources; that is, the electronic device 100 by default completes the first picture with each of the multiple completion resources, obtaining the completed pictures corresponding to them.
S1505, the electronic device 100 pastes the first completion resource back to the completion area in the first picture to obtain the high-definition picture.
The electronic device 100 crops the first completion resource so that its picture size is the same as the size of the completion area in the first picture. Then, the electronic device 100 pastes the cropped first completion resource back to the completion area in the first picture to obtain the high-definition picture.
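S1505's crop-then-paste can be sketched as follows; nested lists stand in for pixel buffers, and all names and sizes are our illustration, not the patent's code:

```python
# Illustrative sketch of S1505: crop the completion resource to the size of
# the completion area, then paste it back at the area's position.

def crop(picture, height, width):
    """Keep the top-left height x width block of the picture."""
    return [row[:width] for row in picture[:height]]

def complete(first_picture, resource, top, left, area_h, area_w):
    """Crop the resource to the area size and overwrite the area's pixels."""
    cropped = crop(resource, area_h, area_w)
    out = [row[:] for row in first_picture]
    for r in range(area_h):
        for c in range(area_w):
            out[top + r][left + c] = cropped[r][c]
    return out

pic = [[0] * 3 for _ in range(3)]        # first picture, 3x3
res = [[7] * 5 for _ in range(5)]        # oversized first completion resource
hd = complete(pic, res, 1, 1, 2, 2)      # 2x2 completion area at (1, 1)
```

A production implementation would crop about the resource's center rather than its top-left corner, per the center-alignment described earlier.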
In a possible implementation manner, the electronic device 100 may directly store the high-definition picture obtained after the intelligent completion into the gallery. A user may view the high definition picture in the gallery, specifically please refer to the embodiment shown in fig. 9C, which is not described herein again.
In another possible implementation manner, after the electronic device 100 pastes the completion resource back to the completion area of the first picture, the electronic device 100 simultaneously displays, on the user interface, the original image acquired by the camera and the picture obtained by intelligently completing it; the user may choose to save one or more of the pictures displayed in the user interface. For details, please refer to the embodiment shown in fig. 9D, which is not repeated here.
This method can complete the first picture acquired by the camera using completion resources with different features, realizing diversity of picture completion.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (19)

1. A picture completion method, the method comprising:
the method comprises the steps that first electronic equipment obtains a first picture;
the first electronic equipment determines a first area where a first designated target in the first picture is located;
and the first electronic equipment modifies the image in the first area according to a first completion resource to obtain a second picture, wherein the first completion resource is a picture other than the first picture.
2. The method according to claim 1, wherein before the first electronic device determines the first area of the first picture in which the first designated target is located, the method further comprises:
the first electronic equipment receives a first operation of a user for the first area;
the first electronic device determines a first area where a first designated target in the first picture is located, and the method specifically includes:
and responding to the first operation, and the first electronic equipment determines the first area of the first designated target in the first picture.
3. The method of any of claims 1-2, wherein before the first electronic device modifies the image in the first area according to the first completion resource, the method further comprises:
the first electronic device displays a first user interface, wherein the first user interface comprises the first completion resource; and
the first electronic device receives a second operation for the first completion resource; and
the modifying, by the first electronic device, of the image in the first area according to the first completion resource specifically comprises:
in response to the second operation, the first electronic device modifies the image in the first area according to the first completion resource.
4. The method according to claim 1, wherein after the first electronic device determines the first area in which the first designated target in the first picture is located, the method further comprises:
the first electronic device determines, from a resource library according to a feature of the first designated target, the first completion resource whose feature similarity with the first designated target is greater than a preset value, wherein the resource library is any one of the following: a picture locally stored on the first electronic device, a picture locally stored on a second electronic device, or a picture on a server.
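A minimal sketch of the claim-4 selection step: scan a resource library for the picture whose feature is most similar to the designated target's feature, keeping a candidate only if the similarity exceeds the preset value. Cosine similarity, the dict-shaped library, and the file names are illustrative assumptions; the patent does not specify the feature representation or the similarity metric.

```python
import numpy as np

def pick_completion_resource(target_feature, library, preset=0.9):
    """Return the name of the library picture whose feature similarity
    with the designated target is greatest, provided that similarity
    exceeds the preset value; otherwise return None.
    Cosine similarity is assumed as the metric."""
    best_name, best_sim = None, preset
    t = np.asarray(target_feature, dtype=float)
    for name, feat in library.items():
        f = np.asarray(feat, dtype=float)
        sim = float(t @ f / (np.linalg.norm(t) * np.linalg.norm(f)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

library = {                      # stand-in resource library
    "beach.jpg":  [1.0, 0.1],    # feature close to the target's
    "forest.jpg": [0.0, 1.0],    # dissimilar feature
}
choice = pick_completion_resource([1.0, 0.0], library)
```

The library could equally be backed by the first device's local pictures, a second device's pictures, or a server, as the claim enumerates; only the lookup loop would change.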
5. The method according to claim 4, wherein the modifying, by the first electronic device, of the image in the first area according to the first completion resource to obtain the second picture specifically comprises:
the first electronic device cuts the image in the first area; and
the first electronic device replaces the image in the first area with the first completion resource, and pastes the first completion resource back into the first area.
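The cut / replace / paste-back sequence of claim 5 can be sketched with a boolean mask standing in for the first area. The mask representation and the per-pixel paste are illustrative assumptions; the claim itself does not fix how the area or the paste is represented.

```python
import numpy as np

def cut_and_paste_back(picture, mask, resource):
    """Illustrative sketch of claim 5: cut the image under the mask out
    of the picture, then paste the completion resource back into exactly
    those positions. `mask` is a boolean array of the same shape as the
    picture; `resource` is assumed to be already aligned to it."""
    cut = np.where(mask, 0, picture)        # first area cut out
    pasted = np.where(mask, resource, cut)  # resource pasted back
    return cut, pasted

pic = np.arange(9).reshape(3, 3)        # stand-in first picture
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True                       # stand-in first area
res = np.full((3, 3), 42)               # stand-in completion resource
cut, pasted = cut_and_paste_back(pic, mask, res)
```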
6. The method of claim 1, wherein before the first electronic device obtains the first picture, the method further comprises:
the first electronic device displays a shooting preview interface, wherein the shooting preview interface comprises a picture acquired by a camera in real time, a completion control, and a shooting control; and
the first electronic device receives a third operation for the completion control and a fourth operation for the shooting control; and
the obtaining, by the first electronic device, of the first picture specifically comprises:
in response to the third operation and the fourth operation, the first electronic device obtains the first picture from the pictures acquired by the camera in real time.
7. The method of claim 1, wherein before the first electronic device obtains the first picture, the method further comprises:
the first electronic device displays a second user interface, wherein the second user interface comprises a thumbnail of the first picture; and
the first electronic device receives a fifth operation for the thumbnail of the first picture; and
the obtaining, by the first electronic device, of the first picture specifically comprises:
in response to the fifth operation, the first electronic device obtains the first picture;
after the first electronic device obtains the first picture, the method further comprises:
the first electronic device displays a third user interface, wherein the third user interface comprises the first picture and a completion control; and
the first electronic device receives a sixth operation for the completion control; and
the determining, by the first electronic device, of the first area in which the first designated target in the first picture is located specifically comprises:
in response to the sixth operation, the first electronic device determines the first area in which the first designated target in the first picture is located.
8. The method of any of claims 6-7, wherein the completion control is a two-dimensional completion control or a three-dimensional completion control.
9. The method according to any one of claims 1-8, wherein after the first electronic device modifies the image in the first area according to the first completion resource to obtain the second picture, the method further comprises:
the first electronic device displays a fourth user interface, wherein the fourth user interface comprises the first picture, the second picture, and a save control;
the first electronic device receives a seventh operation for the first picture and/or the second picture; and
the first electronic device receives and responds to an eighth operation for the save control, and saves the first picture and/or the second picture to a storage path corresponding to a gallery application.
10. An electronic device that is a first electronic device, the first electronic device comprising: one or more processors, one or more memories;
the one or more memories coupled with the one or more processors, the one or more memories to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the first electronic device to perform:
acquiring a first picture;
determining a first area where a first designated target in the first picture is located;
modifying an image in the first area according to a first completion resource to obtain a second picture, wherein the first completion resource is a picture other than the first picture.
11. The first electronic device of claim 10, wherein before the first electronic device determines the first area in which the first designated target in the first picture is located, the one or more processors are further configured to invoke the computer instructions to cause the first electronic device to perform:
receiving a first operation of a user for the first area; and
in response to the first operation, determining the first area in which the first designated target in the first picture is located.
12. The first electronic device of any of claims 10-11, wherein before the first electronic device modifies the image in the first area according to the first completion resource, the one or more processors are further configured to invoke the computer instructions to cause the first electronic device to perform:
displaying a first user interface, wherein the first user interface comprises the first completion resource;
receiving a second operation for the first completion resource; and
in response to the second operation, modifying the image in the first area according to the first completion resource.
13. The first electronic device of claim 10, wherein after the first electronic device determines the first area in which the first designated target in the first picture is located, the one or more processors are further configured to invoke the computer instructions to cause the first electronic device to perform:
determining, from a resource library according to a feature of the first designated target, the first completion resource whose feature similarity with the first designated target is greater than a preset value, wherein the resource library is any one of the following: a picture locally stored on the first electronic device, a picture locally stored on a second electronic device, or a picture on a server.
14. The first electronic device of claim 13, wherein the one or more processors are specifically configured to invoke the computer instructions to cause the first electronic device to perform:
cutting the image in the first area; and
replacing the image in the first area with the first completion resource, and pasting the first completion resource back into the first area.
15. The first electronic device of claim 10, wherein the first electronic device further comprises a camera, and before the first electronic device obtains the first picture, the one or more processors are further configured to invoke the computer instructions to cause the first electronic device to perform:
displaying a shooting preview interface, wherein the shooting preview interface comprises a picture acquired by the camera in real time, a completion control, and a shooting control;
receiving a third operation for the completion control and a fourth operation for the shooting control; and
in response to the third operation and the fourth operation, obtaining the first picture from the pictures acquired by the camera in real time.
16. The first electronic device of claim 10, wherein prior to the first electronic device obtaining the first picture, the one or more processors are further configured to invoke the computer instructions to cause the first electronic device to perform:
displaying a second user interface, wherein the second user interface comprises a thumbnail of the first picture;
receiving a fifth operation for a thumbnail of the first picture;
in response to the fifth operation, obtaining the first picture;
after the first electronic device acquires the first picture, the one or more processors are specifically configured to invoke the computer instructions to cause the first electronic device to perform:
displaying a third user interface, the third user interface including the first picture and a completion control;
receiving a sixth operation on the completion control;
in response to the sixth operation, determining the first area in which the first designated target in the first picture is located.
17. The first electronic device of any of claims 15-16, wherein the completion control is a two-dimensional completion control or a three-dimensional completion control.
18. The first electronic device of any of claims 10-17, wherein after the first electronic device modifies the image in the first area according to the first completion resource to obtain the second picture, the one or more processors are further configured to invoke the computer instructions to cause the first electronic device to perform:
displaying a fourth user interface, wherein the fourth user interface comprises the first picture, the second picture, and a save control;
receiving a seventh operation for the first picture and/or the second picture; and
receiving and responding to an eighth operation for the save control, and saving the first picture and/or the second picture to a storage path corresponding to a gallery application.
19. A readable storage medium storing computer instructions which, when executed on a first electronic device, cause the first electronic device to perform a picture completion method as claimed in any one of claims 1-9.
CN202110236949.1A 2020-12-31 2021-03-03 Picture completion method and electronic equipment Pending CN114693511A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020116335782 2020-12-31
CN202011633578 2020-12-31

Publications (1)

Publication Number Publication Date
CN114693511A true CN114693511A (en) 2022-07-01

Family

ID=82135960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110236949.1A Pending CN114693511A (en) 2020-12-31 2021-03-03 Picture completion method and electronic equipment

Country Status (1)

Country Link
CN (1) CN114693511A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination