CN112083864A - Method, device and equipment for processing object to be deleted - Google Patents

Method, device and equipment for processing object to be deleted

Info

Publication number
CN112083864A
CN112083864A
Authority
CN
China
Prior art keywords
deleted
frame
photo
deletion
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010984875.5A
Other languages
Chinese (zh)
Inventor
岳丹波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Prize Intelligent Technology Co ltd
Original Assignee
Shenzhen Prize Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Prize Intelligent Technology Co ltd filed Critical Shenzhen Prize Intelligent Technology Co ltd
Priority to CN202010984875.5A
Publication of CN112083864A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method, a device and equipment for processing an object to be deleted. The method comprises the following steps: taking a photo through dual cameras, recording the depth of field information of the photographed scene, and storing the photo; displaying a selected deletion frame according to a user operation and framing the object to be deleted in the photo; and removing the photo data in the deletion frame according to the depth of field information and filling and repairing the removed area with the surrounding environment data. The method provided by the embodiments of the invention can process the object to be deleted quickly, is not limited by time or place, and requires no image-editing skill from the user, thereby solving the problem that existing ways of processing an object to be deleted in a photo are time-consuming and skill-dependent. The invention also discloses a corresponding device and equipment.

Description

Method, device and equipment for processing object to be deleted
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device and equipment for processing an object to be deleted.
Background
A user often takes a photo or records a video with the mobile phone camera and then shares it through the corresponding social software; such sharing is time-sensitive, so quick sharing is generally required.
If an intruding subject (such as a passer-by, a car or a small animal) suddenly appears in the frame while a photo or video is being taken, or some unsightly object (such as a garbage bin, litter or a toilet) cannot be kept out of the frame, these all belong to objects to be deleted: they spoil the shooting environment of the main subject and degrade the overall effect.
At present, an object to be deleted in a photo is usually removed afterwards by a third party or by computer software: the photo must be processed on a computer, the object removed with PS (image-processing software such as Photoshop), and the removed part then painted over with the surrounding environment using a PS tool (such as the clone stamp). Although this achieves the desired effect, it takes a relatively long time and requires the user to master the corresponding PS skills; the quality of the result depends on the user's skill level and may be greatly reduced.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
In view of the above disadvantages of the prior art, the present invention provides a method, an apparatus and a device for processing an object to be deleted, so as to solve the problem that existing ways of processing an object to be deleted in a photo are time-consuming and skill-dependent.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method for processing an object to be deleted, which includes the following steps:
taking a photo through the dual cameras, recording the depth of field information of the photographed scene, and storing the photo;
displaying the selected deletion frame according to the operation and framing the object to be deleted in the photo;
and removing the photo data in the deletion frame according to the depth of field information, and filling and repairing the removed area with the environment data.
In the method for processing the object to be deleted, the step of displaying the selected deletion frame and framing the object to be deleted in the photo according to the operation comprises:
when detecting that a photo in the gallery is clicked, displaying a function bar below the photo;
when the edit icon in the function bar is clicked, displaying a delete frame selection bar below the photo;
and framing the object to be deleted with the selected deletion frame according to the user operation, and saving the selection.
In the method for processing the object to be deleted, the deletion frames provided in the deletion frame selection bar include: a nine-grid pattern, a vertical circle and a vertical rectangle.
In the method for processing the object to be deleted, after the icon of the nine-grid pattern is clicked, the photo is divided equally into 9 areas by dividing lines; after it is detected that an area is selected, the size range of the selected area is adjusted according to the user's drag operation, so that the object to be deleted is framed entirely within the area.
In the method for processing the object to be deleted, after the icon of the vertical circle is clicked, a circular frame is displayed at the center of the photo, and the position and size of the circular frame are adjusted according to the user's drag operation so as to frame the whole object to be deleted within the circular frame.
In the method for processing the object to be deleted, after the icon of the vertical rectangle is clicked, a square frame is displayed at the center of the photo, and its position and size are adjusted according to the user's drag operation.
In the method for processing the object to be deleted, the step of removing the photo data in the deletion frame according to the depth of field information includes:
calculating the distance between each object in the photo and the lens, separating out the corresponding focal planes according to the distances, finding the focal plane where the deletion frame is located, and filling the area in the deletion frame with white.
In the method for processing the object to be deleted, the step of filling and repairing the removed area with the environment data includes:
identifying each pixel on the edge of the white area in the deletion frame, calculating the value of each such pixel from the surrounding environment data, and filling the edge;
taking the pixels adjacent to the filled edge as a new edge, and calculating and filling the value of each pixel on the new edge from the already calculated edge values and the environment data;
and returning to the step of identifying each pixel on the edge of the white area in the deletion frame, contracting inward edge by edge, until the value of every pixel in the whole white area has been calculated and the filling and repair are finished.
In a second aspect, an embodiment of the present invention provides an apparatus for processing an object to be deleted, including:
the shooting unit is used for taking photos through the dual cameras, recording the depth of field information of the photographed scene, and storing the shot photos;
the selection unit is used for displaying the selected deletion frame according to the operation and framing the object to be deleted in the photo;
and the removal and repair unit is used for removing the photo data in the deletion frame according to the depth of field information, and filling and repairing the removed area with the environment data.
In a third aspect, an embodiment of the present invention further provides an apparatus for processing an object to be deleted, including a memory and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for executing the method.
In a fourth aspect, embodiments of the present invention also provide a non-transitory computer-readable storage medium, where instructions of the storage medium, when executed by a processor of a device, enable the device to perform the method.
Compared with the prior art, the method, the device and the equipment for processing the object to be deleted provided by the invention record the depth of field information of the framed scene through the dual cameras and store the shot photo; display the selected deletion frame according to the operation and frame the object to be deleted in the photo; and remove the photo data in the deletion frame according to the depth of field information and fill and repair the removed area with data. The method can process the object to be deleted quickly, is not limited by time or place, and requires no processing skill from the user.
Drawings
FIG. 1 is a flow chart of a method for processing an object to be deleted according to the present invention;
FIG. 2 is a schematic diagram of a function bar provided by the present invention;
FIG. 3 is a schematic diagram of a delete box provided by the present invention;
FIG. 4 is a schematic view of the nine-grid pattern provided by the present invention;
FIG. 5 is a schematic view of the vertical circle provided by the present invention;
FIG. 6 is a schematic view of the vertical rectangle provided by the present invention;
FIG. 7 is a schematic illustration of the focal planes of objects at different positions provided by the present invention;
FIG. 8 is a diagram illustrating the result of processing an object to be deleted according to the present invention;
fig. 9 is a schematic structural diagram of an apparatus for processing an object to be deleted according to the present invention.
Detailed Description
The invention provides a method, a device and equipment for processing an object to be deleted. In order to make the purpose, technical scheme and effect of the invention clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Since dual cameras (two rear cameras) have become standard on existing mobile terminals (a mobile phone is taken as the example in this embodiment), the dual-camera bokeh (background-blur) function is used to record a depth displacement map, i.e. the front-to-back depth information of the scene, while a photo or video is taken. After the photo or video is finished, the object to be deleted is removed in the gallery.
Please refer to fig. 1, which is a flowchart illustrating a method for processing an object to be deleted according to the present invention. As shown in fig. 1, the method for processing an object to be deleted includes the following steps:
and S10, taking a picture through the double cameras, recording the depth of field information of the shot scene, and storing the shot picture.
In the step, the double cameras are started during shooting, the double shooting mode is started to record the depth of field information of the scene, and the shot pictures are stored in the gallery. Taking includes taking a photograph and taking a video, where a photograph is taken as an example. The depth information is a depth displacement map.
A video is composed of many frames; when encoded, the frames are divided into I-frames, P-frames and B-frames, taking H.264 video encoding as an example.
An I-frame, i.e. an intra-coded frame, is an independent frame carrying all of its own information; it can be decoded without reference to other images and can be understood simply as a static picture.
A P-frame, i.e. an inter-frame predictive coded frame, must reference a preceding frame (an I-frame or another P-frame) to be encoded.
A B-frame, i.e. a bidirectional predictive coded frame, records the differences between the current frame and both the preceding and following frames.
An I-frame needs only its own data; a P-frame records the difference from the previous frame; a B-frame records the differences from both the previous and the next frame, and therefore saves the most space.
Therefore, when a video is processed, the editing page entered during playback mainly displays the data of the I-frames, and the user can remove the object to be deleted from an I-frame in the same way as from a photo. The stored P-frame and B-frame data, which reference the I-frames, are then re-encoded, so that the object to be deleted is removed from the whole video.
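To make the I-frame handling concrete, below is a minimal sketch of locating the I-frames that such an editing page would display. The patent names no library; the sketch assumes PyAV (Python bindings for FFmpeg), and the function name iter_i_frames and the file name clip.mp4 are illustrative. The re-encoding of the edited video is deliberately left out, as it depends on the encoder.

```python
# A sketch of walking a video and yielding only its I-frames, assuming PyAV.
# After decoding, pict_type distinguishes intra-coded (I) frames from
# predicted (P/B) frames; to_image() needs Pillow installed.
import av

def iter_i_frames(path):
    """Yield (display_index, PIL image) for every I-frame in the video."""
    with av.open(path) as container:
        for index, frame in enumerate(container.decode(video=0)):
            if frame.pict_type.name == "I":    # intra-coded frame
                yield index, frame.to_image()  # an editable still picture

for index, image in iter_i_frames("clip.mp4"):
    print(f"I-frame at display index {index}, size {image.size}")
```

After the I-frames are edited, the P- and B-frame data that reference them would be re-encoded as the description states.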
S20, displaying the selected deletion frame according to the operation and framing the object to be deleted in the photo.
After it is detected that the photo is opened, a deletion frame is displayed according to the user's operation so that the user can select the object to be deleted. Since the outlines of objects to be deleted differ, this embodiment provides 3 kinds of deletion frames, so step S20 specifically includes:
S21, when it is detected that a photo in the gallery is clicked, a function bar is displayed below the photo.
As shown in fig. 2, the function bar includes a share icon on the left, an edit icon in the middle and a delete icon on the right. When it is detected that the user clicks the edit icon, the removal process for the object to be deleted is entered.
S22, when it is detected that the edit icon in the function bar is clicked, a deletion frame selection bar is displayed below the photo.
For the shape of the deletion frame, this embodiment provides three types, the nine-grid pattern, the vertical circle and the vertical rectangle, as shown in fig. 3, so that the user can select the most suitable deletion frame for objects to be deleted with different outlines or positions.
S23, framing the object to be deleted with the selected deletion frame according to the user operation, and saving.
After the icon of the nine-grid pattern (the leftmost icon in the deletion frame selection bar) is clicked, the photo is divided equally into 9 areas by dividing lines, as shown in fig. 4. Once an area is selected (a thickened selection frame is displayed at its edge), the size range of the selected area can be dragged freely, so that the whole object to be deleted lies inside the area while the area is kept as small as possible. After saving, only the frame of the selected area remains on the photo, and the other frames disappear.
However, the object to be deleted may not lie entirely within one area; the drum shown in fig. 4, for example, spans two areas. In that case, the vertical circle can be used to frame the object to be deleted directly.
As shown in fig. 5, after the icon of the vertical circle is clicked, a circular frame is displayed at the center of the photo. The user can drag the circular frame freely to move it and to change its size range, for example stretching it into an ellipse, as long as the whole object to be deleted can be enclosed in it.
As shown in fig. 6, after the icon of the vertical rectangle is clicked, a square frame is displayed at the center of the photo, and the user can drag it freely to move it and to change its length and width.
The user can select the most suitable deletion frame according to the outline of the object to be deleted, and the operation bar at the top offers four operations, cancel, undo (the left arrow), redo (the right arrow) and save, so that work can be kept in stages.
After selection and saving, the object to be deleted in the deletion frame can be removed.
S30, removing the photo data in the deletion frame according to the depth of field information, and filling and repairing the removed area with the environment data.
In this step, all the photo data in the deletion frame (the object to be deleted together with some of the surrounding environment data) are removed. Using the depth of field information (i.e. the depth displacement map) during removal makes the removal of the object to be deleted more accurate. The removed area of the deletion frame is first filled with white, and the data are then backfilled through machine learning: machine learning is performed on the environment data of the regions adjacent to the removed area, and the removed area is filled and repaired so that the backfilled data fit the original image better and the whole photo looks more natural. The photo is then stored.
Removing the photo data in the deletion frame according to the depth of field information relies on the dual-camera principle, i.e. a left camera and a right camera: during imaging, the left camera captures the background and the right camera captures the subject (usually the person or object closest to the lens). The distance between an object and the lens is calculated by triangulation, analogous to human binocular vision. Specifically, given the focal length of the dual-camera module and the baseline between the main camera (the right camera) and the auxiliary camera (the left camera), the distance from the lens to an object at any position in the picture can be calculated; this is prior art and is not described in detail here.
The displacement information of the different subjects in the whole photo (the main person or object, the object to be deleted, vehicles, animals and so on) is calculated, which provides the numerical basis for the removal that follows. As shown in fig. 7, objects at different positions lie on different focal planes with different focal distances: the focal distance a of focal plane A, the focal distance b of focal plane B and the focal distance c of focal plane C all differ, so the images on different focal planes can be removed separately by the dual-camera principle.
Through these dual-camera calculations, the displacement information of objects on the different focal planes (their distance or focal distance from the lens) described in fig. 7, i.e. the depth displacement map, is obtained. All data in the deletion frame can then be removed according to the depth displacement map: the distance between each photographed object and the lens is calculated, the corresponding focal planes are separated according to the distances, the focal plane where the deletion frame is located is found, and the area in the deletion frame is filled with white, completing the removal of the object to be deleted.
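As a concrete reading of the triangulation above: with the dual-camera baseline B, the focal length f expressed in pixels, and the disparity d of a point between the left and right images, the distance is Z = f x B / d. The following NumPy sketch, which is not from the patent, turns a disparity map into the depth displacement map and masks one focal plane; the tolerance used to group pixels into a plane and all the numbers are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Stereo triangulation Z = f * B / d: the depth displacement map."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)  # guard against d = 0
    return focal_px * baseline_mm / d  # depth in mm, per pixel

def focal_plane_mask(depth_mm, plane_depth_mm, tol_mm=50.0):
    """Pixels whose distance matches a given focal plane, within a tolerance."""
    return np.abs(depth_mm - plane_depth_mm) <= tol_mm

# Synthetic example: a near object (large disparity) on a far background.
disparity = np.full((4, 4), 2.0)
disparity[1:3, 1:3] = 8.0  # the "object to be deleted"
depth = depth_from_disparity(disparity, focal_px=1400.0, baseline_mm=12.0)
# Anchor the focal plane at the centre of the deletion frame, here (2, 2):
mask = focal_plane_mask(depth, plane_depth_mm=depth[2, 2])
print(np.round(depth))  # background ~8400 mm, object ~2100 mm
print(mask)             # True exactly on the object's focal plane
```

In the patent's flow, the mask of the deletion frame's focal plane is what gets filled with white before the repair step below.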
Machine learning is an interdisciplinary subject covering probability theory, statistics, approximation theory and the theory of complex algorithms. It uses the computer as a tool to simulate the human way of learning in real time, and partitions existing content into knowledge structures so as to improve learning efficiency effectively.
Intelligent filling in machine learning, i.e. AI (artificial intelligence) filling, fills a region with reference to the data beside it. Since mobile phone hardware keeps improving, such AI computation is increasingly practical on the phone itself. The removed data are filled with a mean-weighting scheme: based on machine learning, the removed region is backfilled and repaired according to the environment data around it, which specifically includes the following steps:
when filling data, one data X to be filled represents the value of one pixel point in the white region, and the data to be filled is calculated by referring to the environmental data (i.e., the values of 8 pixel points corresponding to 8 directions) of 8 directions around (i.e., adjacent to) the data to be filled, i.e., the value of the corresponding pixel point can be obtained; calculating the value of each pixel point on the edge of the white area, wherein the edge is filled and is not white any more after being filled; forming a new edge by the white part adjacent to the edge, and obtaining and filling the values of all the pixel points on the new edge according to the calculated values of all the pixel points on the edge and the environment data; and by analogy, continuously calculating along the edge and inwards shrinking to gradually reduce the size of the white area until the whole white area is filled up finally.
During calculation, the adjacent environment data in the 8 directions carry two weights. The environment data in the 0°, 90°, 180° and 270° directions of the datum X to be filled, i.e. to its right, above it, to its left and below it, are X1, X2, X3 and X4 respectively, and these four directions carry the higher weight. The environment data in the 45°, 135°, 225° and 315° directions, i.e. at the upper-right, upper-left, lower-left and lower-right corners, are X5, X6, X7 and X8 respectively; these four directions carry the lower weight, taken as 1 by default. A datum X to be filled can then be calculated by the following formula:
X = [N(X1 + X2 + X3 + X4) + (X5 + X6 + X7 + X8) × 1]/(4N + 4), N ≥ 2,
where N is the data weight of the four directions 0°, 90°, 180° and 270° and can be adjusted as required, and the denominator 4N + 4 is the total weight of the eight neighbours, so that X is their weighted mean.
By repeating the above calculation, the values of all pixels in the region to be filled are obtained from the environment data in the 8 adjacent directions, realizing the backfill and repair of the removed region through the environment data and completing the data backfill of the removed object region (the white region in the deletion frame). It should be understood that, while the calculation contracts inward along the edge, some of an edge pixel's neighbours in one or several directions may still lie in the white region; in those directions the environment data are simply the values corresponding to white.
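The following is a minimal single-channel (grayscale) sketch of this edge-inward, mean-weighted fill; an RGB photo would apply the same update to each channel. It follows the description literally, including the point that neighbours still inside the white region contribute the value of white, and uses the total weight 4N + 4 as the denominator. All names (fill_white_region, CARDINAL, and so on) are illustrative, not from the patent.

```python
import numpy as np

CARDINAL = [(0, 1), (-1, 0), (0, -1), (1, 0)]    # 0°, 90°, 180°, 270°: weight N
DIAGONAL = [(-1, 1), (-1, -1), (1, -1), (1, 1)]  # 45°, 135°, 225°, 315°: weight 1

def fill_white_region(img, mask, n=2, white=255.0):
    """Fill the masked (white) region edge-inward; each edge pixel becomes the
    weighted mean of its 8 neighbours. Unfilled neighbours count as white."""
    img = img.astype(float).copy()
    mask = mask.astype(bool).copy()
    h, w = img.shape

    def neighbour(y, x, dy, dx):
        yy, xx = y + dy, x + dx
        if not (0 <= yy < h and 0 <= xx < w):
            return None                          # off the photo: skip direction
        return white if mask[yy, xx] else img[yy, xx]

    while mask.any():
        ys, xs = np.nonzero(mask)
        # Edge of the white region: masked pixels touching a filled pixel.
        edge = [(y, x) for y, x in zip(ys, xs)
                if any(0 <= y + dy < h and 0 <= x + dx < w
                       and not mask[y + dy, x + dx]
                       for dy, dx in CARDINAL + DIAGONAL)]
        if not edge:
            break  # nothing borders filled data (mask covers the whole image)
        updates = {}
        for y, x in edge:
            num = den = 0.0
            for weight, dirs in ((n, CARDINAL), (1, DIAGONAL)):
                for dy, dx in dirs:
                    v = neighbour(y, x, dy, dx)
                    if v is not None:
                        num += weight * v
                        den += weight  # a full 8-neighbourhood gives den = 4n + 4
            updates[(y, x)] = num / den
        for (y, x), v in updates.items():  # the whole ring is updated together,
            img[y, x] = v                  # then stops being "white" for the
            mask[y, x] = False             # next, inner ring
    return img

# Toy example: a 5x5 grey photo with a 3x3 white hole in the middle.
photo = np.full((5, 5), 100.0)
hole = np.zeros((5, 5), dtype=bool)
hole[1:4, 1:4] = True
photo[hole] = 255.0
print(np.round(fill_white_region(photo, hole, n=2), 1))
```

A real implementation would let machine learning refine these values; the formula above is only the mean-weighted core of the backfill.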
After the processing is completed, the photo can be stored, giving the result shown in fig. 8. The user can then share the photo and upload it to the corresponding social software.
Based on the method for processing an object to be deleted, an embodiment of the present invention further provides an apparatus for processing an object to be deleted, as shown in fig. 9, including:
the shooting unit 10 is used for taking photos through the dual cameras, recording the depth of field information of the photographed scene, and storing the shot photos;
the selection unit 20 is used for displaying the selected deletion frame according to the operation and framing the object to be deleted in the photo;
and the removal and repair unit 30 is used for removing the photo data in the deletion frame according to the depth of field information, and filling and repairing the removed area with the environment data.
Those skilled in the art will appreciate that the configuration shown in fig. 9 is a block diagram of only a portion of the configuration relevant to the present application and does not constitute a limitation on the apparatus to which the present application is applied, and that a particular apparatus may include more or less components than those shown, or combine certain components, or have a different arrangement of components.
Based on the method for processing an object to be deleted, an embodiment of the present invention further provides an apparatus for processing an object to be deleted, including a storage, where the storage may be a hard disk or a memory and stores one or more algorithm-processing programs of the method for processing an object to be deleted; the programs may be configured to be executed by one or more processors and include instructions for executing the method.
The apparatus can be an electronic device with dual cameras, such as a smartphone or a tablet computer. In some embodiments, the processor may be a central processing unit (CPU), a microprocessor or another data-processing chip, and is configured to run the program code stored in the storage or to process data, for example to execute the method for processing the object to be deleted.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium, such as a memory, including instructions executable by a processor of a device to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In summary, in the method, apparatus and device for processing an object to be deleted provided by the present invention, the depth of field information of the framed scene is recorded by the dual cameras and the shot photo is stored; the selected deletion frame is displayed according to the operation and frames the object to be deleted in the photo; and the photo data in the deletion frame are removed according to the depth of field information, and the removed area is filled and repaired with data. The object to be deleted can thus be removed during photographing or video recording, highlighting the photographed subject, eliminating the influence of the object to be deleted on the framing of the subject, improving the picture quality of photos and videos, and bringing out the scene effect of the core subject.
Meanwhile, the user only needs to choose a suitable deletion frame to freely select the object to be deleted, which is then deleted and repaired automatically, without limitation of time, place or space and without any requirement on the user's processing skill, so that the shot photos or videos can be shared quickly. Moreover, combining machine learning with the dual-camera depth displacement map makes the removal more accurate, and filling the removed area with mean-weighted data lets the repaired area blend well with its surroundings, ensuring the smoothness of the overall effect.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing relevant hardware (such as a processor, a controller, etc.) through a computer program, and the program may be stored in a computer readable storage medium, and when executed, the program may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for processing an object to be deleted is characterized by comprising the following steps:
taking a photo through the dual cameras, recording the depth of field information of the photographed scene, and storing the photo;
displaying the selected deletion frame according to the operation and framing the object to be deleted in the photo;
and removing the photo data in the deletion frame according to the depth of field information, and filling and repairing the removed area with the environment data.
2. The method of claim 1, wherein the step of displaying the selected deletion box and framing the object to be deleted in the photo according to the operation comprises:
when detecting that a photo in the gallery is clicked, displaying a function bar below the photo;
when the edit icon in the function bar is clicked, displaying a delete frame selection bar below the photo;
and framing the object to be deleted with the selected deletion frame according to the user operation, and saving the selection.
3. The method according to claim 2, wherein the deletion frames provided in the deletion frame selection bar include: a nine-grid pattern, a vertical circle and a vertical rectangle.
4. The method as claimed in claim 3, wherein after the icon of the nine-grid pattern is clicked, the photo is divided equally into 9 areas by dividing lines, and after it is detected that an area is selected, the size range of the selected area is adjusted according to the user's drag operation so as to frame the whole object to be deleted in the area.
5. The method as claimed in claim 3, wherein after the icon of the vertical circle is clicked, a circular frame is displayed at the center of the photo, and the position and size of the circular frame are adjusted according to the user's drag operation so as to frame the whole object to be deleted in the circular frame.
6. The method of claim 3, wherein after the icon of the vertical rectangle is clicked, a square frame is displayed at the center of the photo, and the position and size of the square frame are adjusted according to the user's drag operation.
7. The method of claim 2, wherein the step of removing the photo data in the deletion frame according to the depth of field information comprises:
calculating the distance between each object in the photo and the lens, separating out the corresponding focal planes according to the distances, finding the focal plane where the deletion frame is located, and filling the area in the deletion frame with white.
8. The method of claim 7, wherein the step of filling and repairing the removed area with the environment data comprises:
identifying each pixel on the edge of the white area in the deletion frame, calculating the value of each such pixel from the surrounding environment data, and filling the edge;
taking the pixels adjacent to the filled edge as a new edge, and calculating and filling the value of each pixel on the new edge from the already calculated edge values and the environment data;
and returning to the step of identifying each pixel on the edge of the white area in the deletion frame, contracting inward edge by edge, until the value of every pixel in the whole white area has been calculated and the filling and repair are finished.
9. An apparatus for processing an object to be deleted, comprising:
the shooting unit is used for taking photos through the dual cameras, recording the depth of field information of the photographed scene, and storing the shot photos;
the selection unit is used for displaying the selected deletion frame according to the operation and framing the object to be deleted in the photo;
and the removal and repair unit is used for removing the photo data in the deletion frame according to the depth of field information, and filling and repairing the removed area with the environment data.
10. An apparatus for processing an object to be deleted, the apparatus comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs comprising instructions for performing the method of any one of claims 1-8.
CN202010984875.5A 2020-09-18 2020-09-18 Method, device and equipment for processing object to be deleted Pending CN112083864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010984875.5A CN112083864A (en) 2020-09-18 2020-09-18 Method, device and equipment for processing object to be deleted

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010984875.5A CN112083864A (en) 2020-09-18 2020-09-18 Method, device and equipment for processing object to be deleted

Publications (1)

Publication Number Publication Date
CN112083864A 2020-12-15

Family

ID=73738034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010984875.5A Pending CN112083864A (en) 2020-09-18 2020-09-18 Method, device and equipment for processing object to be deleted

Country Status (1)

Country Link
CN (1) CN112083864A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667296A (en) * 2008-09-05 2010-03-10 索尼株式会社 Image processing method, image processing apparatus, program and image processing system
CN103312981A (en) * 2013-03-22 2013-09-18 中科创达软件股份有限公司 Synthetic multi-picture taking method and shooting device
CN106550184A (en) * 2015-09-18 2017-03-29 中兴通讯股份有限公司 Photo processing method and device
CN105976336A (en) * 2016-05-06 2016-09-28 安徽伟合电子科技有限公司 Fuzzy repair method of video image
CN106651762A (en) * 2016-12-27 2017-05-10 努比亚技术有限公司 Photo processing method, device and terminal
CN109634494A (en) * 2018-11-12 2019-04-16 维沃移动通信有限公司 A kind of image processing method and terminal device
CN109816613A (en) * 2019-02-28 2019-05-28 广州华多网络科技有限公司 Image completion method and device
CN111124227A (en) * 2019-12-18 2020-05-08 维沃移动通信有限公司 Image display method and electronic equipment

Similar Documents

Publication Publication Date Title
US10284789B2 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
US10872420B2 (en) Electronic device and method for automatic human segmentation in image
CN104680501B (en) The method and device of image mosaic
WO2018082185A1 (en) Image processing method and device
WO2018068420A1 (en) Image processing method and apparatus
US20160301868A1 (en) Automated generation of panning shots
WO2019221013A4 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
WO2014187265A1 (en) Photo-capture processing method, device and computer storage medium
CN101593353A (en) Image processing method and equipment and video system
CN109948525A (en) It takes pictures processing method, device, mobile terminal and storage medium
CN102496147A (en) Image processing device, image processing method and image processing system
CN105654451A (en) Image processing method and device
KR101593316B1 (en) Method and apparatus for recontructing 3-dimension model using stereo camera
CN106296574A (en) 3-d photographs generates method and apparatus
CN111161136B (en) Image blurring method, image blurring device, equipment and storage device
CN112598628A (en) Image occlusion detection method and device, shooting equipment and medium
CN114615480A (en) Projection picture adjusting method, projection picture adjusting device, projection picture adjusting apparatus, storage medium, and program product
CN109600667B (en) Video redirection method based on grid and frame grouping
CN112819937B (en) Self-adaptive multi-object light field three-dimensional reconstruction method, device and equipment
JP2009047496A (en) Stereoscopic imaging device, control method of stereoscopic imaging device, and program
JP2009047498A (en) Stereoscopic imaging device, control method of stereoscopic imaging device, and program
JP2009047495A (en) Stereoscopic imaging device, control method of stereoscopic imaging device, and program
CN112083864A (en) Method, device and equipment for processing object to be deleted
CN105893578A (en) Method and device for selecting photos
US10552970B2 (en) Efficient guide filter for depth refinement

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information

Address after: 518000 901, building 1, jinlitong financial center building, No. 1100, Xingye Road, Haiwang community, Xin'an street, Bao'an District, Shenzhen, Guangdong Province
Applicant after: Shenzhen KUSAI Communication Technology Co.,Ltd.
Address before: 518000 17th Floor, Block A, Financial Science and Technology Building, 11 Keyuan Road, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province
Applicant before: SHENZHEN PRIZE INTELLIGENT TECHNOLOGY Co.,Ltd.

CB02: Change of applicant information

Address after: 518000 901, building 1, jinlitong financial center building, No. 1100, Xingye Road, Haiwang community, Xin'an street, Bao'an District, Shenzhen, Guangdong Province
Applicant after: Kusai Communication Technology Co.,Ltd.
Address before: 518000 901, building 1, jinlitong financial center building, No. 1100, Xingye Road, Haiwang community, Xin'an street, Bao'an District, Shenzhen, Guangdong Province
Applicant before: Shenzhen KUSAI Communication Technology Co.,Ltd.