CN111210448A - Image processing method - Google Patents
- Publication number
- CN111210448A CN111210448A CN202010041015.8A CN202010041015A CN111210448A CN 111210448 A CN111210448 A CN 111210448A CN 202010041015 A CN202010041015 A CN 202010041015A CN 111210448 A CN111210448 A CN 111210448A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11 — Image analysis; segmentation; region-based segmentation
- G06F16/583 — Information retrieval of still image data; retrieval characterised by metadata automatically derived from the content
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20221 — Image fusion; image merging
- G06T2207/20224 — Image subtraction
Abstract
The application discloses an image processing method comprising the following steps: acquiring an image to be processed; extracting the non-person region of the image background, retrieving pictures that meet a set similarity with that region from a pre-established picture library or the Internet, and storing all retrieved pictures; identifying the person regions and non-person regions in the image to be processed through a convolutional neural network, distinctively marking the different person regions, and displaying them to the user; receiving the person region selected by the user and deleting it from the image to be processed; and extracting the image content in a set area around the deleted region, selecting from the stored pictures the area that best matches that content, and filling the selected area into the deleted region to form the processed image. With this method, irrelevant content can be removed from a photo simply and efficiently.
Description
Technical Field
The present application relates to the field of artificial intelligence technology, and in particular, to an image processing method.
Background
At present, with rising living standards, more and more people choose to travel, and during travel they hope to record memorable moments with a camera. In practice, however, content that was not planned is often captured: at a famous scenic spot with many visitors, it is almost inevitable that persons other than the target person appear in the photo. In such cases, it is desirable to process the photographed image, remove irrelevant persons and similar content, and restore the background completely.
In response to this requirement, invention patent application No. 201711070941.2, entitled "A method for removing unwanted objects from a photo based on a super-pixel voting model", discloses a method for removing unwanted objects from a photo. In that scheme, several photos of the same place from the same angle are captured, compared, and synthesized to obtain a photo with moving objects removed. However, the method places high demands on both the photographer and the subject: the photographer must maintain a stable shooting angle and the subject must hold the same posture, and a clean composite picture is obtained only when both requirements are satisfied.
In addition, invention patent application No. 201910015933.0, entitled "Method for obtaining a photo of a target person based on a purely scenic background", discloses removing irrelevant persons from an image based on a video to obtain a photo containing only the target person. The scheme shoots a video, removes moving objects by comparing preceding and following frames, and finally obtains a photo containing only the target person. Its problem is that if the photographer cannot keep the camera steady during shooting, or the subject moves back and forth, the subject is recognized as a dynamic object. The person being photographed must therefore keep the same pose throughout the shot; otherwise the final photo is unsatisfactory.
Disclosure of Invention
The application provides an image processing method which can remove irrelevant content in a photo simply and efficiently.
To achieve this purpose, the application adopts the following technical scheme:
an image processing method comprising:
acquiring an image to be processed;
extracting the non-person region of the image background, retrieving pictures that meet a set similarity with the non-person region from a pre-established picture library or the Internet, and storing all retrieved pictures;
identifying a plurality of person regions and non-person regions in the image to be processed through a convolutional neural network, distinctively marking the different person regions, and displaying them to the user;
receiving the person region selected by the user, and deleting the selected person region from the image to be processed;
and extracting the image content in a set area around the deleted region of the image to be processed, selecting from all stored pictures the area that best matches that content, and filling the selected area image into the deleted region to form the processed image.
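The steps above reduce to a delete-then-fill operation. The sketch below is a minimal, deliberately simplified illustration — all names are hypothetical, and the retrieved reference picture is assumed to be already aligned pixel-for-pixel with the scene, which the application does not guarantee:

```python
import numpy as np

def remove_region_and_fill(image, person_mask, reference):
    """Delete the masked person region and fill it from a reference picture.

    A simplified stand-in for the claimed flow: `reference` plays the role
    of the best-matching picture retrieved from the library/Internet.
    """
    result = image.copy()
    # "Delete" the selected person region, then fill each deleted pixel
    # from the same coordinates of the matched reference picture.
    result[person_mask] = reference[person_mask]
    return result

# Synthetic 8x8 grayscale scene: background value 100, a "person" block of 200.
scene = np.full((8, 8), 100, dtype=np.uint8)
scene[2:5, 2:5] = 200                               # person region
mask = scene == 200                                 # user-selected region to delete
background = np.full((8, 8), 100, dtype=np.uint8)   # retrieved clean background

restored = remove_region_and_fill(scene, mask, background)
print(int(restored.max()), int(restored.min()))  # → 100 100 (person removed)
```

The boolean-mask copy keeps everything outside the selected region untouched, which matches the claim's requirement that only the deleted area is replaced.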
Preferably, after the plurality of person and non-person regions are identified and before the person regions are distinctively marked, the method further comprises: performing graying and binarization processing on the image.
Preferably, the different person regions are determined as follows: each connected person area is designated as the same person region, and disconnected person areas are designated as different person regions.
Preferably, the pre-established picture library is a 3D panoramic map library of well-known scenic spots worldwide.
Preferably, retrieving pictures that meet a set similarity with the non-person region from a pre-established picture library or the Internet comprises:
retrieving pictures in the 3D panoramic map library according to the shooting location, shooting direction and/or background features of the image to be processed.
Preferably, when the image to be processed is acquired, the latitude and longitude coordinates of its shooting location are acquired as well.
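Shooting coordinates are typically stored in a photo's EXIF metadata as degree/minute/second rationals plus a hemisphere reference letter. A small stdlib-only helper for converting them to signed decimal degrees might look like this (the example coordinates are illustrative only):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to signed decimal degrees.

    `ref` is the hemisphere letter stored alongside the coordinate
    ('N'/'S' for latitude, 'E'/'W' for longitude).
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ('S', 'W') else value

# Example: 39°54'27"N, 116°23'51"E (central Beijing, for illustration).
lat = dms_to_decimal(39, 54, 27, 'N')
lon = dms_to_decimal(116, 23, 51, 'E')
print(round(lat, 4), round(lon, 4))  # → 39.9075 116.3975
```

Decimal degrees are the convenient form for the distance-based library retrieval described later in the specification.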
An image processing apparatus comprising: the device comprises a retrieval unit, a storage unit, an interface unit and an image restoration unit;
the retrieval unit is used for acquiring an image to be processed, extracting a non-human figure region in an image background, retrieving pictures with set similarity with the non-human figure region from a pre-established picture library or the Internet, and storing all the retrieved pictures in the storage unit;
the image restoration unit is used for identifying a plurality of human regions and non-human regions in the image through a convolutional neural network, distinguishing and marking different human regions, and displaying the different human regions to a user through the interface unit; the image processing device is also used for receiving a person region selected by a user through the interface unit and deleting the selected person region from the image to be processed; and extracting image content in a set area around the deleted image to be processed, selecting an area which is most matched with the image content in the set area from all stored pictures, and filling the selected area image into the deleted area of the image to be processed to form a processed image.
In the above technical scheme, an image to be processed is acquired; the non-person region of the image background is extracted, pictures meeting a set similarity with that region are retrieved from a pre-established picture library or the Internet, and all retrieved pictures are stored; the person regions and non-person regions in the image are identified through a convolutional neural network, distinctively marked, and displayed to the user; the person region selected by the user is received and deleted from the image; and the image content in a set area around the deleted region is extracted, the best-matching area is selected from the stored pictures, and it is filled into the deleted region to form the processed image. In this way, image content matching the area to be replaced can be found in a picture library or on the network and used to replace part of the original image, so irrelevant content is removed from the picture simply and efficiently.
Drawings
FIG. 1 is a schematic diagram of a basic flow of an image processing method according to the present application;
fig. 2 is a schematic structural diagram of an image processing apparatus according to the present application.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic basic flow chart of an image processing method in the present application, as shown in fig. 1, the method includes:
Step 101, acquiring the image to be processed. The acquired image may be uploaded by a user terminal to a processing server, or processed locally on the terminal.
Step 102, extracting the non-person region of the image background, and retrieving pictures that meet a set similarity with that region from the pre-established picture library or the Internet.
The required degree of similarity is preset, and pictures meeting it are then retrieved from the picture library or the network.
Specifically, a 3D panorama library may be pre-established as the picture library, and background pictures may be generated from the panoramas for comparison with the non-person region. During retrieval, pictures can be looked up in the 3D panoramic map library according to the shooting location, shooting direction and/or background features of the image to be processed; the three-dimensional panoramic map can be updated and maintained regularly. In addition, when the image to be processed is acquired in step 101, the latitude and longitude coordinates of the shooting location may also be acquired to facilitate retrieval of similar background photos in the 3D panoramic map library; for example, the user may be prompted to upload the coordinates to the processing server.
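One plausible way to realise retrieval by shooting location — not prescribed by the application, and using hypothetical entry names — is to filter library entries by great-circle distance to the shot's coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def retrieve_candidates(library, shot_lat, shot_lon, radius_km=1.0):
    """Return library entries whose stored location lies within radius_km."""
    return [entry for entry in library
            if haversine_km(shot_lat, shot_lon,
                            entry["lat"], entry["lon"]) <= radius_km]

library = [
    {"name": "pano_a", "lat": 39.9075, "lon": 116.3975},
    {"name": "pano_b", "lat": 40.4319, "lon": 116.5704},  # tens of km away
]
hits = retrieve_candidates(library, 39.9080, 116.3970)
print([h["name"] for h in hits])  # → ['pano_a']
```

A real system would combine this coarse geographic filter with the shooting-direction and background-feature comparison the specification also mentions.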
Step 103, storing all the retrieved pictures.
Step 104, identifying a plurality of person regions and non-person regions in the image to be processed through a convolutional neural network, distinctively marking the different person regions, and displaying them to the user.
Preferably, after the person and non-person regions are identified and before the person regions are distinctively marked, the image to be processed is grayed and binarized to facilitate the subsequent matching operations.
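A minimal sketch of the graying and binarization step; the luminance weights (the common ITU-R BT.601 mix) and the fixed threshold are assumptions here, since the application fixes neither:

```python
import numpy as np

def gray_and_binarize(rgb, threshold=128):
    """Luminance grayscale followed by fixed-threshold binarization.

    Returns (gray, binary) where binary holds 255 for pixels at or above
    the threshold and 0 below it.
    """
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    binary = np.where(gray >= threshold, 255, 0).astype(np.uint8)
    return gray, binary

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = [255, 255, 255]            # one white pixel in a black image
gray, binary = gray_and_binarize(rgb)
print(binary[0, 0], binary[1, 1])      # → 255 0
```

Adaptive thresholding (e.g. Otsu's method) would be a natural refinement when lighting varies across the photo.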
When dividing the person regions, each connected person area is preferably treated as one person region and disconnected person areas as different person regions. The person regions may of course be divided according to other rules, which this application does not limit. When the person regions are marked, different regions receive different marks.
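The connected/disconnected rule above is exactly connected-component labeling. A small 4-connectivity sketch (function name hypothetical), operating on a binary person mask:

```python
from collections import deque

def label_person_regions(mask):
    """Label 4-connected regions of a binary mask (list of 0/1 rows).

    Pixels in one connected blob get one label (same person region);
    disconnected blobs get different labels, mirroring the rule above.
    Returns (labels, region_count).
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1                     # start a new region
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:                     # breadth-first flood fill
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1]]
labels, count = label_person_regions(mask)
print(count)  # → 2 separate person regions
```

In practice a library routine (e.g. OpenCV's `connectedComponents`) would replace this loop, but the labeling semantics are the same.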
Step 105, receiving the person region selected by the user, and deleting it from the image to be processed. The user selects the person region to be removed, and the processor deletes the selected region from the image.
Step 106, extracting, from the image with the partial region deleted, the image content in a set area around the deleted region, and selecting from all stored pictures the area that best matches that content.
Among the similar pictures retrieved in step 102, content matching the surroundings of the deleted region is further searched. The specific extent of the surrounding area may be set in advance. The matching itself may be performed in an existing manner and is not described further here.
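The "most matched" criterion is left open by the specification; one simple candidate, shown here as a sketch with hypothetical names, is sum-of-squared-differences between the extracted surrounding content and equally sized patches cut from the stored pictures:

```python
import numpy as np

def best_match_patch(context, candidates):
    """Pick the candidate patch closest to the extracted surrounding content.

    `context` is the image content in the set area around the deleted
    region; `candidates` are same-sized patches from stored pictures.
    Lower SSD score means a better match.
    """
    scores = [float(np.sum((c.astype(np.int64) - context.astype(np.int64)) ** 2))
              for c in candidates]
    best = int(np.argmin(scores))
    return best, scores

context = np.array([[10, 10], [10, 10]], dtype=np.uint8)
candidates = [np.full((2, 2), 200, dtype=np.uint8),   # far off
              np.full((2, 2), 12, dtype=np.uint8)]    # close match
best, scores = best_match_patch(context, candidates)
print(best)  # → 1
```

Normalized cross-correlation would be a more robust choice when the stored picture and the photo differ in exposure.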
Step 107, filling the area image selected in step 106 into the deleted region of the image to be processed to form the processed image.
The filling itself may use an existing picture-filling method and is not described further here.
This concludes the image processing method of the present application. Compared with the prior art, the user need not upload multiple photos or a video, nor rely heavily on shooting technique: a single photo suffices, and the removal and repair work is completed automatically by the image processor. The method effectively removes irrelevant persons from a photo background, is efficient and easy to operate, and greatly improves the user experience.
The application also provides an image processing device which can be used for implementing the image processing method. Fig. 2 is a schematic diagram of a basic structure of an image processing apparatus according to the present application. As shown in fig. 2, the apparatus includes: the device comprises a retrieval unit, a storage unit, an interface unit and an image restoration unit.
The retrieval unit is used for acquiring the image to be processed, extracting the non-human figure region in the image background, retrieving pictures with set similarity with the non-human figure region from a pre-established picture library or the Internet, and storing all the retrieved pictures in the storage unit.
The image restoration unit is used for identifying a plurality of person regions and non-person regions in the image through a convolutional neural network, distinctively marking the different person regions, and displaying them to the user through the interface unit; it is also used for receiving, through the interface unit, the person region selected by the user and deleting the selected region from the image to be processed; and for extracting the image content in a set area around the deleted region, selecting from all stored pictures the area that best matches that content, and filling the selected area image into the deleted region of the image to be processed to form the processed image.
The method and device overcome the prior-art defect of requiring multiple consecutive pictures or a video: irrelevant elements are removed from a single picture, human factors introduced during shooting are effectively avoided, and the process is more efficient and faster.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (7)
1. An image processing method, comprising:
acquiring an image to be processed;
extracting a non-human figure region in an image background, retrieving pictures with set similarity with the non-human figure region from a pre-established picture library or the Internet, and storing all the retrieved pictures;
identifying a plurality of human regions and non-human regions in the image to be processed through a convolutional neural network, carrying out distinguishing marking on different human regions, and displaying the different human regions to a user;
receiving a person region selected by a user, and deleting the selected person region from the image to be processed;
and extracting the image content in a set area around the deleted region of the image to be processed, selecting from all stored pictures the area that best matches that content, and filling the selected area image into the deleted region of the image to be processed to form a processed image.
2. The method of claim 1, wherein after identifying the plurality of human and non-human regions, and before distinguishing the human regions, the method further comprises: and carrying out graying and binarization processing on the image.
3. The method of claim 1, wherein the different person regions are determined as follows: each connected person area is designated as the same person region, and disconnected person areas are designated as different person regions.
4. The method of claim 1, wherein the pre-established picture library is a 3D panoramic map library of well-known scenic spots worldwide.
5. The method of claim 4, wherein retrieving pictures having a set degree of similarity to the non-human figure region from a pre-established picture library or the internet comprises:
and retrieving pictures in the 3D panoramic map library according to the shooting location, the shooting direction and/or the background characteristics of the image to be processed.
6. The method according to claim 5, wherein latitude and longitude coordinate information of a shooting location of the image to be processed is acquired at the time of acquiring the image to be processed.
7. An image processing apparatus characterized by comprising: the device comprises a retrieval unit, a storage unit, an interface unit and an image restoration unit;
the retrieval unit is used for acquiring an image to be processed, extracting a non-human figure region in an image background, retrieving pictures with set similarity with the non-human figure region from a pre-established picture library or the Internet, and storing all the retrieved pictures in the storage unit;
the image restoration unit is used for identifying a plurality of person regions and non-person regions in the image through a convolutional neural network, distinctively marking the different person regions, and displaying them to the user through the interface unit; it is also used for receiving, through the interface unit, the person region selected by the user and deleting the selected region from the image to be processed; and for extracting the image content in a set area around the deleted region, selecting from all stored pictures the area that best matches that content, and filling the selected area image into the deleted region of the image to be processed to form a processed image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010041015.8A | 2020-01-15 | 2020-01-15 | Image processing method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111210448A | 2020-05-29 |
Family
ID=70786128
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107423409A | 2017-07-28 | 2017-12-01 | Vivo Mobile Communication Co., Ltd. | An image processing method, image processing apparatus and electronic device |
| CN109769094A | 2019-02-14 | 2019-05-17 | Ren Yijie | Scenery photo beautification method based on artificial intelligence |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112085688A | 2020-09-16 | 2020-12-15 | Jiang Fang | Method and system for removing pedestrian occlusion during photographing |
| CN112613492A | 2021-01-08 | 2021-04-06 | Harbin Normal University | Data processing method and device |
| CN112613492B | 2021-01-08 | 2022-02-11 | Harbin Normal University | Data processing method and device |
Legal Events
| Code | Title | Description |
|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-05-29 |