CN113920023A - Image processing method and device, computer readable medium and electronic device - Google Patents


Info

Publication number
CN113920023A
Authority
CN
China
Prior art keywords
image
processed
data
depth data
layer
Prior art date
Legal status
Granted
Application number
CN202111151629.2A
Other languages
Chinese (zh)
Other versions
CN113920023B (en)
Inventor
宫振飞
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111151629.2A
Publication of CN113920023A
Application granted
Publication of CN113920023B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10021 Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, a computer-readable medium and an electronic device, and relates to the field of image processing technology. The method comprises the following steps: acquiring depth data of an image to be processed, and layering the image to be processed according to the depth data to obtain a preset number of image layers to be processed; performing data restoration on each image layer to be processed to obtain a restored image layer corresponding to each image layer to be processed; and generating a three-dimensional scene image corresponding to the image to be processed based on the preset number of restored image layers. Because the image to be processed is layered into a preset number of layers and data restoration is performed on each layer separately, the number of data restoration passes can be effectively controlled, the process is simple, and the running speed is high; meanwhile, data restoration can be performed from a single-frame image, which avoids the large-scale computation on large amounts of data required in the related art and further reduces the demands that image restoration places on the computing resources of the restoration device.

Description

Image processing method and device, computer readable medium and electronic device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable medium, and an electronic device.
Background
With the progress of science and technology, computer vision is widely applied in real life. However, the cameras used in daily life can only acquire the color data, and the corresponding depth data, that lie within their field of view; they cannot acquire the color data and depth information of parts of objects in the scene that are blocked from view, which causes data loss at the corresponding positions in the image and seriously affects the quality of the reconstructed three-dimensional scene image. For example, when there is an occlusion in the environment, the camera cannot obtain part of the color and depth information of the occluded object, resulting in large "holes" in the image data.
In the related art, a large number of consecutive frame images are generally used to repair the hole regions contained in image data, so as to improve the quality of the three-dimensional scene image. However, this approach requires many consecutive frames as input and also requires operations such as camera calibration to compute information such as depth, which makes the process complicated and places high demands on the computing resources of the image processing device.
Disclosure of Invention
The present disclosure is directed to an image processing method, an image processing apparatus, a computer-readable medium and an electronic device, so as to simplify the image data restoration process at least to a certain extent, increase the speed of image data restoration, and reduce the demands placed on the computing resources of the image restoration device.
According to a first aspect of the present disclosure, there is provided an image processing method including: acquiring depth data of an image to be processed, and layering the image to be processed according to the depth data to obtain a preset number of image layers to be processed; data restoration is carried out on each image layer to be processed to obtain a restored image layer corresponding to each image layer to be processed; and generating a three-dimensional scene image corresponding to the image to be processed based on the preset number of the repaired image layers.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising: the image layering module is used for acquiring depth data of the image to be processed and layering the image to be processed according to the depth data to obtain a preset number of image layers to be processed; the image restoration module is used for restoring data of each image layer to be processed so as to obtain a restored image layer corresponding to each image layer to be processed; and the image generation module is used for generating a three-dimensional scene image corresponding to the image to be processed based on the preset number of the repaired image layers.
According to a third aspect of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the above-mentioned method.
According to a fourth aspect of the present disclosure, there is provided an electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-mentioned method via execution of the executable instructions.
According to the image processing method provided by the embodiment of the disclosure, depth data of an image to be processed is acquired and used to layer the image to be processed, so as to obtain a preset number of image layers to be processed; data restoration is then performed on each image layer to be processed separately, so that a three-dimensional scene image corresponding to the image to be processed is generated from the restored image layers corresponding to the preset number of image layers to be processed. Because the image to be processed is layered into a preset number of layers and data restoration is performed on each layer separately, the number of data restoration passes can be effectively controlled, the process is simple, and the running speed is high; meanwhile, data restoration can be performed from a single-frame image, which avoids the large-scale computation on large amounts of data required in the related art and further reduces the demands that image restoration places on the computing resources of the restoration device.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied;
FIG. 3 schematically illustrates a flow chart of a method of image processing in an exemplary embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating one embodiment of obtaining depth data of an image to be processed according to the present disclosure;
FIG. 5 schematically illustrates a comparison before and after a noise reduction process in an exemplary embodiment of the disclosure;
FIG. 6 schematically illustrates a comparison before and after an edge-preserving filtering process in an exemplary embodiment of the disclosure;
FIG. 7 schematically illustrates a comparison before and after a hole repair process in an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a comparison before and after an edge clipping process in an exemplary embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating layering of images to be processed to obtain three image layers to be processed according to an exemplary embodiment of the disclosure;
FIG. 10 schematically illustrates an exemplary image for data restoration processing of an image layer to be processed in an exemplary embodiment of the disclosure;
FIG. 11 schematically illustrates a mirror moving route in a horizontal mirror movement mode in an exemplary embodiment of the present disclosure;
FIG. 12 schematically illustrates a mirror moving route in a circular mirror movement mode in an exemplary embodiment of the present disclosure;
fig. 13 schematically shows a composition diagram of an image processing apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating a system architecture of an exemplary application environment to which an image processing method and apparatus according to an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The terminal devices 101, 102, 103 may be various electronic devices having an image processing function, including but not limited to desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The image processing method provided by the embodiment of the present disclosure can be executed by the terminal devices 101, 102, 103, and accordingly, the image processing apparatus is generally disposed in the terminal devices 101, 102, 103. However, it is easily understood by those skilled in the art that the image processing method provided in the embodiment of the present disclosure may also be executed by the server 105, and accordingly, the image processing apparatus may also be disposed in the server 105, which is not particularly limited in the exemplary embodiment. For example, in an exemplary embodiment, a user may obtain an image to be processed and depth data through a camera module and a depth sensor, which are included in the terminal devices 101, 102, and 103 and used for acquiring the image and depth data, and then upload the image to be processed and the depth data to the server 105, and after the server 105 generates a three-dimensional scene image by using the image processing method provided by the embodiment of the present disclosure, the three-dimensional scene image is transmitted to the terminal devices 101, 102, and 103, and so on.
An exemplary embodiment of the present disclosure provides an electronic device for implementing an image processing method, which may be the terminal device 101, 102, 103 or the server 105 in fig. 1. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the image processing method via execution of the executable instructions.
The following takes the mobile terminal 200 in fig. 2 as an example, and exemplifies the configuration of the electronic device. It will be appreciated by those skilled in the art that the configuration of figure 2 can also be applied to fixed type devices, in addition to components specifically intended for mobile purposes. In other embodiments, mobile terminal 200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is only schematically illustrated and does not constitute a structural limitation of the mobile terminal 200. In other embodiments, the mobile terminal 200 may also interface differently than shown in fig. 2, or a combination of multiple interfaces.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display 290, a camera module 291, an indicator 292, a motor 293, a button 294, and a Subscriber Identity Module (SIM) card interface 295. The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, and the like.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The NPU is a Neural-Network (NN) computing processor; it processes input information rapidly by drawing on the structure of biological neural networks, for example the way signals are transferred between neurons of the human brain, and can also continuously learn by itself. The NPU enables applications such as intelligent recognition on the mobile terminal 200, for example image recognition, face recognition, speech recognition and text understanding. In some embodiments, data repair, scene recognition, execution of segmentation algorithms and the like may be implemented by the NPU.
The mobile terminal 200 implements the display function through the GPU, the display screen 290, the application processor and the like. The GPU is a microprocessor for image processing and is connected to the display screen 290 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information. In some embodiments, the processes mentioned below, such as generating three-dimensional scene images and generating three-dimensional videos, may be implemented by the GPU.
The mobile terminal 200 may implement a photographing function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like. The ISP is used for processing data fed back by the camera module 291; the camera module 291 is used for capturing still images or videos; the digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals; the video codec is used to compress or decompress digital video, and the mobile terminal 200 may also support one or more video codecs. In some embodiments, a plurality of camera modules may be provided in the mobile terminal to capture binocular images.
The depth sensor 2801 is used to acquire depth information of a scene. In some embodiments, a depth sensor may be disposed in the camera module 291 for acquiring the to-be-processed image and the depth data of the to-be-processed image at the same time.
The pressure sensor 2802 is used to sense a pressure signal and convert the pressure signal into an electrical signal. The gyro sensor 2803 may be used to determine a motion gesture of the mobile terminal 200. In addition, other functional sensors, such as an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be provided in the sensor module 280 according to actual needs.
In the related art, a large number of consecutive frame images are generally used to repair the hole regions contained in image data, so as to improve the quality of the three-dimensional scene image. Specifically, with a large number of consecutive frames as input, a three-dimensional point cloud can be obtained through camera calibration, feature point detection, feature point matching and the like, and a three-dimensional scene image can then be generated from the three-dimensional point cloud, a texture map and the like. However, this approach requires many consecutive frames as input and also requires operations such as camera calibration to compute information such as depth, which makes the process complicated and places high demands on the computing resources of the image processing device.
Based on one or more of the problems described above, the present exemplary embodiment provides an image processing method. The image processing method may be applied to the server 105, and may also be applied to one or more of the terminal devices 101, 102, and 103, which is not particularly limited in the present exemplary embodiment. Referring to fig. 3, the image processing method may include the following steps S310 to S330:
in step S310, depth data of the image to be processed is obtained, and image layering is performed on the image to be processed according to the depth data, so as to obtain a preset number of image layers to be processed.
The image to be processed may be a binocular image acquired by a binocular camera module, or a monocular image acquired by a monocular camera module; correspondingly, the depth data of the image to be processed may be absolute depth data acquired by a monocular camera module, or relative depth data acquired by a binocular camera module. For example, for a mobile phone with a binocular camera, the relative depth data of the image captured by the main camera module may be obtained.
It should be noted that, in an exemplary embodiment, the terminal device used for acquiring the image to be processed may not be equipped with a device for acquiring depth data; for example, a mobile phone may not be provided with a depth sensor. In this case, referring to fig. 4, depth prediction may be performed on the acquired image to be processed by a depth prediction network, so as to predict the depth data corresponding to the image to be processed; alternatively, the relative depth data of the image captured by the main camera module may be computed from a binocular image by means of a binocular matching algorithm or the like. A sketch of the binocular-matching route is given below.
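By way of illustration, the binocular-matching route could be realized with a standard stereo matcher. In the following Python sketch, the choice of OpenCV's semi-global block matcher and all parameter values are assumptions made for illustration, not the specific algorithm of this embodiment.

import cv2
import numpy as np

def relative_depth_from_stereo(left_bgr, right_bgr):
    # Estimate a relative depth (disparity) map for the main-camera image from a
    # binocular image pair; matcher choice and parameters are illustrative only.
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to float
    disparity[disparity <= 0] = 0.0                                     # drop invalid matches
    return disparity / (disparity.max() + 1e-6)                         # normalized relative depth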
In an exemplary embodiment, after the depth data is obtained, before the image to be processed is layered according to the depth data, the depth data may be preprocessed, and then the image to be processed is layered according to the processed depth data.
In an exemplary embodiment, the pre-processing may include one or a combination of the following processing modes: nonlinear mapping processing, noise reduction processing, edge-preserving filtering processing, hole repair processing, and edge clipping processing.
In an exemplary embodiment, non-linear mapping may be performed for all depth data, and the acquired depth data may be converted into relative depth data of a short distance and then noise reduction may be performed based on the converted relative depth data.
In an exemplary embodiment, the process of nonlinear mapping may be implemented by the following equation (1):
[Equation (1): nonlinear depth mapping formula, shown as an image in the original publication]
where (d_ij)_new represents the mapped depth data; k represents a preset background depth mapping coefficient; (d_ij)_old represents the depth data before mapping; and n represents the total amount of depth data. Through equation (1), the depth of field of the mapped depth data can be adjusted.
In an exemplary embodiment, noise reduction may be performed on all of the depth data, and any noise reduction approach may be used. For example, the noise in the depth data can be handled by mean depth determination, connected component detection, Gaussian filtering and the like, as shown in fig. 5. A minimal sketch of such a pipeline is given below.
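The combination of steps and all thresholds in the following sketch are illustrative assumptions, since the embodiment does not prescribe a particular noise reduction algorithm.

import cv2
import numpy as np

def denoise_depth(depth, min_region=200, outlier_sigma=3.0):
    # Suppress outlier depth values and small isolated regions, then smooth.
    d = depth.astype(np.float32).copy()
    mean, std = float(d.mean()), float(d.std())
    d[np.abs(d - mean) > outlier_sigma * std] = mean          # mean-depth check
    # Connected component detection: drop tiny regions that deviate from the mean depth.
    mask = (np.abs(d - mean) > 0.5 * std).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    for i in range(1, num):
        if stats[i, cv2.CC_STAT_AREA] < min_region:
            d[labels == i] = mean
    return cv2.GaussianBlur(d, (5, 5), 0)                     # Gaussian filtering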
In an exemplary embodiment, edge-preserving filtering may be applied to the edge data in the depth data. Specifically, edge-preserving filtering may apply median filtering only to the deeper side of edge regions in the depth data while preserving the original depth data on the shallower side, as shown in fig. 6. This filtering acts only locally and sharpens edges, so that objects at different depths in the image to be processed can be distinguished; at the same time, it reduces the complexity of the median filtering and preserves the complete edge depth data of shallow objects in the image to be processed. A sketch is given below.
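The following sketch illustrates one possible reading of this edge-preserving filtering; the depth-gradient edge detector, the band width and the kernel size are assumptions made for illustration.

import cv2
import numpy as np

def edge_preserving_filter(depth, edge_band=5, ksize=5):
    # Median-filter only the deeper side of depth edges; the shallower side
    # keeps its original depth data.
    d = depth.astype(np.float32)
    grad_x = cv2.Sobel(d, cv2.CV_32F, 1, 0, ksize=3)
    grad_y = cv2.Sobel(d, cv2.CV_32F, 0, 1, ksize=3)
    edges = (cv2.magnitude(grad_x, grad_y) > d.std()).astype(np.uint8)
    band = cv2.dilate(edges, np.ones((edge_band, edge_band), np.uint8))  # edge region
    local_median = cv2.medianBlur(d, ksize)
    deeper_side = (band == 1) & (d > local_median)
    out = d.copy()
    out[deeper_side] = local_median[deeper_side]
    return out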
In an exemplary embodiment, in strongly exposed or weakly textured regions the depth data produced by the depth prediction network or the binocular matching algorithm may contain prediction errors, so hole repair processing may be performed on depth-data holes in the image to be processed to optimize the depth data, as shown in fig. 7. Specifically, connected domains of different depths can be obtained using a preset depth threshold; regions whose depth is clearly correlated with that of the surrounding objects but whose average depth value differs greatly from theirs are screened out according to the size of the connected domains and identified as hole regions caused by prediction errors; the neighbouring depth with the greatest correlation is then found around each hole region through a dilation strategy and used to cover the erroneous depth data. A sketch is given below.
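One possible sketch of this hole repair processing is shown below; the depth quantization step, the area threshold and the deviation criterion are illustrative assumptions.

import cv2
import numpy as np

def repair_depth_holes(depth, depth_step=0.1, max_hole_area=400):
    # Find small connected regions whose average depth deviates strongly from
    # their surroundings and cover them with neighbouring depth via dilation.
    d = depth.astype(np.float32).copy()
    kernel = np.ones((3, 3), np.uint8)
    levels = np.round(d / depth_step).astype(np.int32)        # preset depth threshold
    hole_mask = np.zeros(d.shape, np.uint8)
    for level in np.unique(levels):
        region = (levels == level).astype(np.uint8)
        num, labels, stats, _ = cv2.connectedComponentsWithStats(region, connectivity=8)
        for i in range(1, num):
            if stats[i, cv2.CC_STAT_AREA] >= max_hole_area:
                continue
            comp = (labels == i).astype(np.uint8)
            ring = cv2.dilate(comp, kernel) - comp            # immediate surroundings
            if ring.sum() == 0:
                continue
            if abs(d[comp == 1].mean() - d[ring == 1].mean()) > 2 * depth_step:
                hole_mask |= comp                             # prediction-error hole
    # Grow valid neighbouring depth into the hole regions until they are covered.
    while hole_mask.any():
        border = cv2.dilate(1 - hole_mask, kernel) & hole_mask  # hole pixels touching valid depth
        if not border.any():
            break
        grown = cv2.dilate(np.where(hole_mask == 1, 0, d), kernel)
        d[border == 1] = grown[border == 1]
        hole_mask[border == 1] = 0
    return d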
In an exemplary embodiment, the depth data may also be subjected to edge clipping processing for an image to be processed that contains many small objects. Specifically, an edge image between objects of different depths can be obtained through a preset depth threshold; the size of each edge in the edge image is then evaluated, and when the size of an edge is smaller than a preset size threshold, the depth data at the position of that edge is smoothed so that the small-size edge is clipped, as shown in fig. 8. Clipping small-size edges reduces the complexity of the image depth data and thus the complexity of the subsequent image restoration processing. A sketch is given below.
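A sketch of such edge clipping processing is given below; the gradient-based edge image, the size threshold and the smoothing kernel are illustrative assumptions.

import cv2
import numpy as np

def clip_small_edges(depth, depth_thresh=0.1, min_edge_px=80, ksize=9):
    # Smooth the depth data around edges that are smaller than a preset size so
    # that very small objects do not add layering and repair complexity.
    d = depth.astype(np.float32)
    grad = cv2.morphologyEx(d, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    edges = (grad > depth_thresh).astype(np.uint8)            # edges between depth levels
    num, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    smoothed = cv2.GaussianBlur(d, (ksize, ksize), 0)
    out = d.copy()
    for i in range(1, num):
        if stats[i, cv2.CC_STAT_AREA] < min_edge_px:          # small-size edge
            area = cv2.dilate((labels == i).astype(np.uint8),
                              np.ones((ksize, ksize), np.uint8))
            out[area == 1] = smoothed[area == 1]              # clip it by smoothing
    return out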
In an exemplary embodiment, after the depth data of the image to be processed is obtained, the image to be processed may be layered according to the depth data, so as to obtain a preset number of image layers to be processed. When layering is performed based on depth data, pixels in an image to be processed can be divided into a plurality of connected regions based on the depth data corresponding to each pixel in the image to be processed, and then the plurality of connected regions are classified based on a segmentation algorithm to obtain a plurality of types of connected region sets; and then, combining the connected region sets according to a preset rule to obtain a preset number of image layers to be processed.
The segmentation algorithm used may be a semantic segmentation algorithm, a panoptic segmentation algorithm, or any other segmentation algorithm that can distinguish different objects in a scene; the preset rule can be set differently for different types of images to be processed. For example, for an image to be processed that contains a portrait, since the portrait is usually the main area, the preset rule may be set as follows: the set of connected regions identified by the segmentation algorithm as belonging to the portrait region forms one layer; the sets of connected regions in which the other objects identified by the segmentation algorithm are located are combined into one layer; and the sets of connected regions identified by the segmentation algorithm as belonging to the background are combined into one layer, yielding three image layers to be processed, as shown in fig. 9. A sketch of this layering rule is given below.
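The following sketch illustrates the portrait rule described above; the segmentation class ids and the depth quantization step are assumptions, and any segmentation model that distinguishes the portrait, other objects and the background could be substituted.

import cv2
import numpy as np

PERSON_CLASS = 1                 # assumed segmentation class id for "portrait"
BACKGROUND_CLASSES = {0}         # assumed class ids for background regions

def layer_image(image, depth, seg_mask, depth_step=0.1):
    # Split a portrait photo into three to-be-processed layers (portrait, other
    # objects, background) by classifying depth-connected regions with the
    # segmentation mask; each layer keeps its colour, depth and pixel mask.
    levels = np.round(depth / depth_step).astype(np.int32)
    group = np.full(depth.shape, 2, np.uint8)    # 0 = portrait, 1 = others, 2 = background
    for level in np.unique(levels):
        region = (levels == level).astype(np.uint8)
        num, labels = cv2.connectedComponents(region, connectivity=8)
        for i in range(1, num):
            comp = labels == i
            cls = np.bincount(seg_mask[comp].ravel()).argmax()   # majority class of the region
            if cls == PERSON_CLASS:
                group[comp] = 0
            elif cls not in BACKGROUND_CLASSES:
                group[comp] = 1
    layers = []
    for g, name in enumerate(("portrait", "others", "background")):
        m = group == g
        layers.append({"name": name,
                       "color": np.where(m[..., None], image, 0),
                       "depth": np.where(m, depth, 0.0),
                       "mask": m})
    return layers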
Layering the image to be processed through the depth data and the segmentation algorithm separates the main object in the image to be processed from the other objects, and prevents the depth data of different layers from interfering with one another when data restoration is subsequently performed on each image layer to be processed; at the same time, the number of executions of the image restoration algorithm can be controlled through the preset number of layers, which simplifies the image restoration process and increases the running speed.
In step S320, data restoration is performed on each to-be-processed image layer to obtain a restored image layer corresponding to each to-be-processed image layer.
In an exemplary embodiment, after obtaining a preset number of to-be-processed image layers, data restoration may be performed on each to-be-processed image layer, so as to obtain a restored image layer corresponding to each to-be-processed image layer.
Specifically, for each image layer to be processed, the edge data to be processed in that layer is first extracted and repaired, so that the edge data of the regions that will be unknown when the image to be processed is converted into the three-dimensional scene image is predicted from the known edges in the image layer to be processed. Because edge data reflects well the structural information involved in converting the image to be processed into the three-dimensional scene image, the repaired edge data can be used as prior data. The repaired edge data is then used, together with the known depth data in the image layer to be processed, to predict the depth data of the regions that will be unknown in the three-dimensional scene image, thereby repairing the depth data of the image layer to be processed; likewise, the repaired edge data is used, together with the known color data in the image layer to be processed, to predict the color data of the unknown regions, thereby repairing the color data of the image layer to be processed. A complete restored image layer is then generated based on the repaired edge data, the repaired depth data and the repaired color data.
For example, in an exemplary embodiment, the edge data, depth data and color data of an image may be repaired by deep-learning-based image repair networks. Specifically, referring to fig. 10 (fig. 10 details the processing of the main object layer; the secondary object layer and the background layer are processed in the same way and are not detailed), assume that the image to be processed contains a portrait and is layered based on the preset rule into three image layers to be processed: a main object layer, a secondary object layer and a background layer. Each of these three layers is processed as follows: the edge data contained in the layer is first extracted (because the image layer to be processed is in fact only a part of the image to be processed, the obtained edge data is a local edge), and the edge data is repaired through an edge repair model to obtain the repaired edge data corresponding to the layer; the depth data of the layer is then extracted, and the repaired edge data and the depth data of the layer are input together into a depth repair model to repair the depth data of the layer; meanwhile, the color data of the layer may be extracted, and the repaired edge data and the color data of the layer are input together into a color repair model to repair the color data of the layer. For example, when color repair is performed on the main object layer, the missing part of the known image in the main object layer (such as the repair area in fig. 10) may be repaired based on the repaired edge data and the color data. Finally, the restored image layer is generated from the obtained repaired edge data, repaired depth data and repaired color data. The chaining of these three repair stages is sketched below.
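In the following sketch, edge_model, depth_model and color_model are hypothetical callables that stand in for the edge repair, depth repair and color repair networks, whose architectures are not specified here; the Canny thresholds are likewise illustrative. Only the chaining of the three stages is shown.

import cv2
import numpy as np

def repair_layer(layer, edge_model, depth_model, color_model):
    # Chain the three repair stages for one to-be-processed layer; the three
    # model arguments are hypothetical stand-ins for deep repair networks.
    mask = layer["mask"].astype(np.uint8)                     # known pixels of this layer
    gray = cv2.cvtColor(layer["color"].astype(np.uint8), cv2.COLOR_BGR2GRAY)
    local_edges = cv2.Canny(gray, 50, 150)                    # local edge data of the layer
    repaired_edges = edge_model(local_edges, mask)            # edge repair (structural prior)
    repaired_depth = depth_model(layer["depth"], repaired_edges, mask)
    repaired_color = color_model(layer["color"], repaired_edges, mask)
    return {"edges": repaired_edges, "depth": repaired_depth,
            "color": repaired_color, "mask": mask}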
In step S330, a three-dimensional scene image corresponding to the image to be processed is generated based on the preset number of restored image layers.
In an exemplary embodiment, after obtaining a preset number of restored image layers corresponding to a preset number of image layers to be processed, data of the restored image layers of each layer may be integrated, and the integrated data of the restored image layers is rendered by an image rendering device such as a GPU, so as to obtain a three-dimensional scene image corresponding to the image to be processed.
Furthermore, in an exemplary embodiment, after obtaining the three-dimensional scene image, a three-dimensional video may be generated according to the three-dimensional scene image. Specifically, scene recognition may be performed on the three-dimensional scene image to determine a scene type corresponding to the three-dimensional scene image, then a mirror moving route corresponding to the three-dimensional scene may be determined according to different scene types, and a corresponding three-dimensional video may be generated based on the mirror moving route and the three-dimensional scene image.
In an exemplary embodiment, a segmentation algorithm may be used to identify a class of an object included in a three-dimensional scene image, and then a scene is divided according to the class of the object in a segmentation result to obtain a scene type; and then determining a mirror moving route corresponding to the three-dimensional scene according to a corresponding relation between a predefined scene type and the mirror moving route, and then generating a corresponding three-dimensional video based on the mirror moving route and the three-dimensional scene image.
The segmentation algorithm used may be a semantic segmentation algorithm, a panoptic segmentation algorithm, or any other segmentation algorithm that can distinguish different objects in a scene.
For example, when an object that appears only in indoor scenes (such as a sofa, a television or a dining table) is present, the scene type of the three-dimensional scene image may be determined to be an indoor scene. When an object that clearly appears only in outdoor scenes (such as an airplane, a bicycle or a ship) is present, the scene type may be determined to be an outdoor scene. When the scene contains objects such as potted plants, cats or people and none of the obvious indoor or outdoor objects described above, the scene type may be determined to be an uncertain scene. For an indoor scene, because the depth of detail is relatively clear, the mirror moving route can be determined in the three-dimensional scene image using a horizontal mirror movement mode (as shown in fig. 11); for an outdoor scene, the mirror moving route can be determined using a dolly-zoom mirror movement mode (the foreground stays fixed while the field of view of the background enlarges), so as to reduce problems such as foreground boundary breakage caused by an inaccurate depth map; for an uncertain scene, the mirror movement can be set according to the number of objects of different types contained in the scene; for example, when the scene contains many objects such as potted plants, the mirror moving route can be determined using a circular mirror movement mode (as shown in fig. 12). A sketch of this mapping is given below.
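This scene-type decision can be sketched as a simple lookup; the object-class lists and the mode names below are illustrative assumptions tied to whatever class vocabulary the segmentation model provides, not an exhaustive specification.

INDOOR_OBJECTS = {"sofa", "tv", "dining table"}
OUTDOOR_OBJECTS = {"airplane", "bicycle", "boat"}

def choose_mirror_mode(detected_classes):
    # Map the object classes found in the three-dimensional scene image to a
    # mirror movement mode.
    classes = set(detected_classes)
    if classes & INDOOR_OBJECTS:
        return "horizontal"      # indoor scene: horizontal mirror movement (fig. 11)
    if classes & OUTDOOR_OBJECTS:
        return "dolly_zoom"      # outdoor scene: foreground fixed, background field of view enlarged
    return "circular"            # uncertain scene, e.g. many potted plants (fig. 12)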
After the mirror movement mode is determined, different starting points of the mirror moving route can be selected in the three-dimensional scene according to the different mirror movement modes. For example, for the horizontal mirror movement, the coordinates of the starting point can be set in advance; for the circular mirror movement, the center, radius, starting direction and the like of the circle can be set in advance, which is not limited by the present disclosure.
In an exemplary embodiment, after the mirror moving route is determined, a three-dimensional scene may be built from the three-dimensional scene image: the repaired scene is converted into a triangular mesh through the depth data contained in the three-dimensional scene image, and the color of each face is then determined from the color data of the three vertices of each triangle, completing the construction of the three-dimensional scene. A view-angle image corresponding to each route point on the mirror moving route is then acquired in the three-dimensional scene, and the view-angle images are connected together in the order of the route points on the mirror moving route to generate the video output. A sketch is given below.
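A sketch of the mesh construction and the waypoint-by-waypoint video generation follows; the camera intrinsics (fx, fy, cx, cy) and the render_view renderer are assumptions, since the actual rendering (for example on the GPU) is not detailed here.

import cv2
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    # Back-project each pixel with assumed camera intrinsics and connect
    # neighbouring pixels into two triangles per 2x2 block; the colour of each
    # face can then be taken from the colour data of its three vertices.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    vertices = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    idx = v * w + u
    a, b, c, d = idx[:-1, :-1], idx[:-1, 1:], idx[1:, :-1], idx[1:, 1:]
    faces = np.concatenate([np.stack([a, b, c], -1).reshape(-1, 3),
                            np.stack([b, d, c], -1).reshape(-1, 3)])
    return vertices, faces

def write_camera_path_video(route, render_view, path="out.mp4", size=(1280, 720), fps=30):
    # render_view(waypoint) -> BGR frame is a hypothetical renderer over the mesh;
    # frames are written in the order of the waypoints on the mirror moving route.
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for waypoint in route:
        writer.write(render_view(waypoint))
    writer.release()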
In summary, in this exemplary embodiment, on the one hand, a three-dimensional video can be generated from a single image, which greatly increases the fun and playability of camera photography while keeping the amount of data to be acquired small and easy to obtain; on the other hand, previously captured images that contain no depth data can be converted into three-dimensional videos by means of a depth prediction network, a binocular matching algorithm or the like; furthermore, the depth noise reduction processing scheme reduces the complexity of the image depth, improves the discrimination between different targets, and allows the number of data restoration passes to be effectively controlled; in addition, classifying scenes through segmentation and determining the mirror moving route with an adaptive mirror movement method reduces the complexity of the three-dimensional scene and improves the visual effect of the generated three-dimensional video.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, as shown in fig. 13, the embodiment of the present example also provides an image processing apparatus 1300, which includes an image layering module 1310, an image repairing module 1320, and an image generating module 1330. Wherein:
the image layering module 1310 may be configured to obtain depth data of an image to be processed, and perform image layering on the image to be processed according to the depth data to obtain a preset number of image layers to be processed.
The image restoration module 1320 may be configured to perform data restoration on each to-be-processed image layer to obtain a restored image layer corresponding to each to-be-processed image layer.
The image generation module 1330 may be configured to generate a three-dimensional scene image corresponding to the image to be processed based on the preset number of the repaired image layers.
In an exemplary embodiment, the image processing apparatus may further include a video generation module, which may be configured to perform scene recognition on the three-dimensional scene image to determine a scene type corresponding to the three-dimensional scene image; determining a mirror moving route corresponding to the three-dimensional scene image according to the scene type; and generating a corresponding three-dimensional video based on the mirror moving route and the three-dimensional scene image.
In an exemplary embodiment, the video generation module may be further configured to establish a three-dimensional scene according to the three-dimensional scene image, and acquire a view angle image corresponding to each route point on the mirror moving route in the three-dimensional scene; and connecting the view angle images based on the sequence of the route points on the mirror moving route to generate a three-dimensional video.
In an exemplary embodiment, the image layering module 1310 may be configured to pre-process the depth data, so as to perform image layering on the image to be processed according to the processed depth data to obtain a preset number of image layers to be processed.
In an exemplary embodiment, the pre-processing includes at least one of the following processing modes: nonlinear mapping processing, noise reduction processing, edge-preserving filtering processing, hole repair processing, and edge clipping processing.
In an exemplary embodiment, the image layering module 1310 may be configured to divide the pixels in the image to be processed into a plurality of connected regions based on the depth data corresponding to each pixel in the image to be processed; classifying the plurality of connected regions based on a segmentation algorithm to obtain a plurality of types of connected region sets; and combining the connected region sets according to a preset rule to obtain a preset number of image layers to be processed.
In an exemplary embodiment, the image repairing module 1320 may be configured to extract the to-be-processed edge data in the to-be-processed image layer, and perform edge repairing on the to-be-processed edge data to obtain repaired edge data; perform data restoration on the depth data of the image layer to be processed based on the repaired edge data to obtain repaired depth data; perform color restoration on the color data of the image layer to be processed based on the repaired edge data to obtain repaired color data; and generate a restored image layer based on the repaired edge data, the repaired depth data and the repaired color data.
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit", "module" or "system".
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 3 may be performed.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring depth data of an image to be processed, and carrying out image layering on the image to be processed according to the depth data to obtain a preset number of image layers to be processed;
performing data restoration on each image layer to be processed to obtain a restored image layer corresponding to each image layer to be processed;
and generating a three-dimensional scene image corresponding to the image to be processed based on the preset number of the repaired image layers.
2. The method of claim 1, further comprising:
carrying out scene recognition on the three-dimensional scene image to determine a scene type corresponding to the three-dimensional scene image;
determining a mirror moving route corresponding to the three-dimensional scene image according to the scene type;
and generating a corresponding three-dimensional video based on the mirror moving route and the three-dimensional scene image.
3. The method of claim 2, wherein generating the corresponding three-dimensional video based on the mirror path and the three-dimensional scene image comprises:
establishing a three-dimensional scene according to the three-dimensional scene image, and acquiring a view angle image corresponding to each route point on the mirror moving route in the three-dimensional scene;
and connecting the view angle images based on the sequence of the route points on the mirror moving route to generate a three-dimensional video.
4. The method according to claim 1, wherein before the image layering of the image to be processed according to the depth data obtains a preset number of image layers to be processed, the method further comprises:
and preprocessing the depth data so as to carry out image layering on the image to be processed according to the processed depth data to obtain a preset number of image layers to be processed.
5. The method of claim 4, wherein the pre-processing comprises at least one of:
nonlinear mapping processing, noise reduction processing, edge-preserving filtering processing, hole repair processing, and edge clipping processing.
6. The method according to claim 1, wherein the image layering the to-be-processed image according to the depth data to obtain a preset number of to-be-processed image layers comprises:
dividing pixels in an image to be processed into a plurality of connected regions based on depth data corresponding to each pixel in the image to be processed;
classifying the plurality of connected regions based on a segmentation algorithm to obtain a plurality of types of connected region sets;
and combining the connected region sets according to a preset rule to obtain a preset number of image layers to be processed.
7. The method according to claim 1, wherein the performing data restoration for each of the image layers to be processed comprises:
extracting to-be-processed edge data in the to-be-processed image layer, and performing edge repairing on the to-be-processed edge data to obtain repaired edge data;
performing data restoration on the depth data of the image layer to be processed based on the repaired edge data to obtain repaired depth data;
performing color restoration on the color data of the image layer to be processed based on the repaired edge data to obtain repaired color data;
generating the restored image layer based on the repaired edge data, the repaired depth data, and the repaired color data.
8. An image processing apparatus characterized by comprising:
the image layering module is used for acquiring depth data of an image to be processed and layering the image to be processed according to the depth data to obtain a preset number of image layers to be processed;
the image restoration module is used for restoring data of each image layer to be processed to obtain a restored image layer corresponding to each image layer to be processed;
and the image generation module is used for generating a three-dimensional scene image corresponding to the image to be processed based on the preset number of the repaired image layers.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-7 via execution of the executable instructions.
CN202111151629.2A 2021-09-29 2021-09-29 Image processing method and device, computer readable medium and electronic equipment Active CN113920023B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111151629.2A CN113920023B (en) 2021-09-29 2021-09-29 Image processing method and device, computer readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111151629.2A CN113920023B (en) 2021-09-29 2021-09-29 Image processing method and device, computer readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113920023A (en) 2022-01-11
CN113920023B (en) 2024-10-15

Family

ID=79237218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111151629.2A Active CN113920023B (en) 2021-09-29 2021-09-29 Image processing method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113920023B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979482A (en) * 2022-05-23 2022-08-30 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and medium
CN116109753A (en) * 2023-04-12 2023-05-12 深圳原世界科技有限公司 Three-dimensional cloud rendering engine platform and data processing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130067474A (en) * 2011-12-14 2013-06-24 연세대학교 산학협력단 Hole filling method and apparatus
CN102609974A (en) * 2012-03-14 2012-07-25 浙江理工大学 Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN106851247A (en) * 2017-02-13 2017-06-13 浙江工商大学 Complex scene layered approach based on depth information
CN110290374A (en) * 2019-06-28 2019-09-27 宝琳创展国际文化科技发展(北京)有限公司 A kind of implementation method of naked eye 3D
CN111760286A (en) * 2020-06-29 2020-10-13 完美世界(北京)软件科技发展有限公司 Switching method and device of mirror operation mode, storage medium and electronic device

Also Published As

Publication number Publication date
CN113920023B (en) 2024-10-15


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant