CN114240963A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents

Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN114240963A
CN114240963A (application CN202111417587.2A)
Authority
CN
China
Prior art keywords
region
feature
image
depth
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111417587.2A
Other languages
Chinese (zh)
Inventor
林子尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Sonar Sky Information Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonar Sky Information Consulting Co ltd
Priority to CN202111417587.2A
Publication of CN114240963A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/50 Depth or shape recovery
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10024 Color image
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The application discloses an image processing method, an image processing device, a storage medium and an electronic device, wherein the method comprises the following steps: determining a first feature region in an image, and acquiring a depth image of the image and a segmented image of a second feature region in the image, wherein the second feature region comprises the first feature region; acquiring feature depth information corresponding to the first feature region according to the depth image; and correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region. By adopting the embodiments of the application, the accuracy of segmenting the feature region in the image can be enhanced, and the reliability of image processing can be improved.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
In recent years, with the development of social networks and self-media, photography has become an indispensable part of the digital era, and image processing techniques have emerged accordingly. Image segmentation, an important part of image processing, has likewise been gaining attention: for example, an image segmentation technique can segment the portrait region in an image, separate the portrait region from the background, and highlight or blur the portrait region.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and electronic equipment, which can enhance the accuracy of segmenting a feature region in an image and improve the reliability of image processing. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
determining a first feature region in an image, and acquiring a depth image of the image and a segmented image of a second feature region in the image, where the second feature region comprises the first feature region;
acquiring feature depth information corresponding to the first feature region according to the depth image;
and correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the image acquisition module is used for determining a first feature region in an image, and acquiring a depth image of the image and a segmented image of a second feature region in the image, where the second feature region comprises the first feature region;
the information acquisition module is used for acquiring feature depth information corresponding to the first feature region according to the depth image;
and the image correction module is used for correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
According to the feature depth information of the first feature region, the second feature region segmented from the image is corrected; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused by relying on a single algorithm (depth information alone or a segmentation algorithm alone) in the related art, avoids the impact of wrong segmentation on the imaging quality, and keeps the computational cost low while improving the reliability of image processing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an image processing method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3A is a schematic diagram of an image including a first feature region according to an embodiment of the present application;
fig. 3B is a schematic view of a depth image corresponding to an image provided in an embodiment of the present application;
fig. 3C is a schematic diagram of a segmented image including a second feature region according to an embodiment of the present application;
fig. 4 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of image and depth image alignment provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a depth image including depth feature information according to an embodiment of the present application;
fig. 7 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a second feature region modified according to depth feature information according to an embodiment of the present application;
fig. 9 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It is also noted that, unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in specific cases by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes the association relationship of associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The present application will be described in detail with reference to specific examples.
In recent years, with the development of social networks and self-media, photography has become an indispensable part of the digital era, and image processing techniques have emerged accordingly. Image segmentation technology is likewise gaining importance as a key part of image processing; for example, an image segmentation technique can segment the portrait region in an image, separate the portrait region from the background, and perform a highlighting operation or a blurring operation on the portrait region.
Fig. 1 is a schematic diagram of image processing provided in an embodiment of the present application. The left image is a photo taken by a user. When the user wants to apply the "background blurring" function, that is, when the processor detects a trigger on the background-blurring control, the portrait in the left image needs to be segmented to obtain the portrait region shown in the right image, so that the background region outside the portrait region can then be blurred.
Among the related image segmentation techniques, there are mainly three categories. The first comprises conventional methods such as color-space methods, edge-feature-based methods, and wavelet-transform-based methods. These methods struggle to analyze areas with similar colors, variations of light and shadow, or image noise; referring to a larger range of information brings partial improvement, but also increases the computational cost. The second comprises semantic segmentation and instance segmentation based on deep learning. Such algorithms depend mainly on the network architecture and the training data; in particular, the training data set needs to cover all target scenes as far as possible, yet photographing behavior varies widely and viewing angles differ, and the segmentation result may be affected by factors such as irregular portrait occlusion (multi-person scenes) and uncertain texture features, so the stability is insufficient. The third is segmentation based on a depth map, which can distinguish objects at different distances but cannot separate different objects at the same distance. Moreover, depth generated by stereoscopic vision suffers from parallax occlusion and is not completely reliable.
Therefore, in view of the above problems, the present application provides an image processing method, which can enhance the accuracy of segmenting the feature region in the image and improve the reliability of image processing.
In one embodiment, as shown in fig. 2, a flowchart of an image processing method provided by an embodiment of the present application may be implemented by relying on a computer program and may be run on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-like application.
Specifically, the image processing method includes:
S101, according to a first feature region in an image, acquiring a depth image of the image and a segmented image of a second feature region in the image, where the second feature region includes the first feature region.
An image is a description or representation of a natural thing or objective object (a person, animal, plant, landscape, etc.) that resembles and vividly depicts it; in other words, an image is a representation that contains information about the object being described. An image is a picture with a visual effect, such as a photo, a painting, a clip art, a map, a satellite cloud picture, a movie frame, an X-ray film, an electroencephalogram, or an electrocardiogram.
The first feature region in the image can be understood as a specific region within a region of interest of the image; the region of interest is the second feature region, and the second feature region includes the first feature region. A region may be a single pixel or a set of pixels. For example, when the region of interest is a portrait region, the first feature region may be a face region; when the region of interest is an automobile region, the first feature region may be the front region of the automobile; when the region of interest is a chest region, the first feature region may be a heart region.
It is to be understood that, in the following figures, the first feature region is exemplified as a face region and the second feature region as a portrait region; the present application also covers other types and contents of first and second feature regions, and the image processing method described herein is applicable to any image containing a first feature region and a second feature region.
The first feature region in the image may be obtained by any one or more feature extraction algorithms in the related art, such as a face feature extraction algorithm, a deep learning network, or a Local Binary Pattern (LBP) algorithm, which is not limited in this application; an illustrative sketch follows. Fig. 3A is a schematic diagram of an image including a first feature region provided in an embodiment of the present application, where the first feature region 101 is the image region corresponding to a human face.
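Purely as an illustration, the following obtains the first feature region with OpenCV's bundled Haar-cascade face detector. The detector choice and the function name detect_first_feature_region are assumptions of this sketch, not part of the method, which permits any feature extraction algorithm.

    import cv2

    def detect_first_feature_region(image_bgr):
        # Detect a face bounding box to serve as the first feature region.
        # The Haar cascade is just one of the extraction algorithms allowed.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None  # no face found
        # Return the largest detection as (x, y, w, h).
        return max(faces, key=lambda f: f[2] * f[3])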
A depth image of the image is acquired. The depth image contains depth information, which can be understood as the distance value or depth value from any point in the image to the camera, i.e., the three-dimensional coordinate information of that point. The depth information may be obtained by any one or more methods such as stereo image matching, a depth camera, or deep learning, which is not limited in this application. Fig. 3B is a schematic diagram of a depth image corresponding to an image according to an embodiment of the present disclosure, in which areas of different colors correspond to different depth values.
A segmented image of the second feature region in the image is also acquired. The segmented image includes the second feature region and at least one other region, and may be obtained by any one or more of histogram thresholding, region growing, image-based random field models, and relaxed-labeling region segmentation, which should not be construed as limiting.
Fig. 3C is a schematic diagram of a segmented image including a second feature region provided in an embodiment of the present application. The second feature region 102 is a portrait region obtained by an image segmentation method, and the segmented image shown in fig. 3C further includes a first region 103, a second region 104, a third region 105, and a fourth region 106, where the fourth region 106 is a background region obtained by the image segmentation method. It is to be understood that the division into the regions shown in fig. 3C is merely illustrative.
S102, acquiring feature depth information corresponding to the first feature region according to the depth image.
The feature depth information corresponding to the first feature region may be understood as information representing the distance value between the first feature region and the camera. It is obtained according to the depth image and the first feature region extracted from the image.
For example, for the first feature region 101 in the image shown in fig. 3A, the feature depth information corresponding to the first feature region 101 is obtained according to the depth map shown in fig. 3B.
S103, correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
In the related image segmentation technology, the segmentation accuracy of the second feature region in the segmented image is often unsatisfactory; therefore, the second feature region in the segmented image is corrected according to the feature depth information corresponding to the first feature region.
The principle of the correction is as follows. There is a relevance between the first feature region and the second feature region: the two lie roughly in the same plane, or the difference between their depth values is smaller than a preset threshold (for example, when a person takes a self-portrait, the face and the whole portrait are approximately in the same plane). Accordingly, a second feature region is estimated in the depth map from the feature depth information corresponding to the first feature region, this depth-derived second feature region is compared with the second feature region in the segmented image, and the second feature region in the segmented image is corrected according to the comparison result.
In one embodiment, the correction method is: determining a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, and correcting the second feature region according to the target region. There may be one or more target regions, and a target region may be a single pixel or a set of pixels.
For example, according to the feature depth information corresponding to the first feature region 101, a second feature region and a background region are obtained in the depth image shown in fig. 3B; the second feature region in the depth image of fig. 3B is compared with the segmented image of fig. 3C, which includes the second feature region 102, the first region 103, the second region 104, the third region 105, and the fourth region 106 (the background region); the target regions are determined to be the first region 103, the second region 104, and the third region 105, and the original second feature region 102 is corrected according to them.
In this embodiment, correcting the second feature region of the segmented image can overcome the problem that a second feature region derived purely from the feature depth information of the first feature region in the depth image cannot distinguish different objects at the same distance. Specifically, in the depth image shown in fig. 3B, the portrait region (i.e., the second feature region) obtained from the depth feature information corresponding to the face region (i.e., the first feature region) includes a star-shaped decoration region 107; the star-shaped decoration region 107 can be understood as a number of hanging star lights, with the user taking a self-portrait among them. When the portrait region 102 of the segmented image shown in fig. 3C is corrected based on the portrait region derived from the depth image of fig. 3B, the star-shaped decoration region belongs to the background region 106 in the segmented image, so the target region used for correcting the portrait region 102 does not include the star-shaped decoration region outside the portrait region 102. The method thus overcomes the inaccuracy that arises when an image is segmented only according to the depth image and different objects at the same distance cannot be distinguished.
In one embodiment, correcting the second feature region according to the target region includes fusing the target region into the second feature region. For example, in the segmented image shown in fig. 3C, the first region 103, the second region 104, and the third region 105, as target regions, are fused into the second feature region 102; a sketch of such a fusion follows.
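Purely as an illustration, assuming each region is available as a boolean mask of the image size (the patent does not prescribe a data structure), the fusion can be implemented as a pixel-wise union:

    import numpy as np

    def fuse_target_regions(second_feature_mask, target_masks):
        # Union the target-region masks into the second feature region mask,
        # e.g. fusing regions 103, 104 and 105 into region 102 of fig. 3C.
        corrected = second_feature_mask.copy()
        for mask in target_masks:
            corrected = np.logical_or(corrected, mask)
        return corrected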
In one embodiment, the image processing techniques presented herein may be applied to video encoding. Specifically, each video frame is divided into a background region and a region of interest (the region of interest being the second feature region), and the two are encoded independently; for example, a low-distortion encoding method is used for the region of interest and a simpler, more efficient encoding method for the background region. This improves video coding efficiency and saves memory space.
According to the present application, the second feature region segmented from the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused by relying on a single algorithm (depth information alone or a segmentation algorithm alone) in the related art, avoids the impact of wrong segmentation on the imaging quality, and keeps the computational cost low while improving the reliability of image processing.
In one embodiment, as shown in fig. 4, the flowchart of an image processing method provided by an embodiment of the present application may be implemented by relying on a computer program and may be run on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-like application.
S201, according to a first feature region in the image, acquiring a depth image of the image and a segmented image of a second feature region in the image, where the second feature region includes the first feature region.
S201 refers to S101 described above, and is not described herein again.
S202, acquiring a depth image region corresponding to the first feature region according to the aligned depth image and image.
The depth image is aligned with the image, and the depth image region of the first feature region in the depth image is obtained from the aligned depth image and image. Methods for aligning the image and the depth image include one or more of ORB (Oriented FAST and Rotated BRIEF) feature extraction, the SURF (Speeded-Up Robust Features) algorithm, feature matching algorithms, homography matrix computation, image warping algorithms, and the like.
Fig. 5 is a schematic diagram of alignment between an image and a depth image according to an embodiment of the present application. The upper portion of fig. 5 shows the image containing the first feature region aligned with the depth image; the depth image region corresponding to the first feature region, namely the depth image region 201, is obtained from the aligned depth image and image. A sketch of one such alignment pipeline follows.
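The following is a minimal sketch of the ORB + feature matching + homography + warp pipeline named above, assuming OpenCV and assuming the depth map has a grayscale rendering with enough texture for matching (which real depth maps may not provide); it is one possible alignment, not the method itself.

    import cv2
    import numpy as np

    def align_depth_to_image(image_gray, depth_vis_gray, depth_map):
        # Estimate a homography from depth-image features to color-image
        # features, then warp the depth map into the image's frame.
        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(depth_vis_gray, None)
        kp2, des2 = orb.detectAndCompute(image_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2),
                         key=lambda m: m.distance)[:200]
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = image_gray.shape
        return cv2.warpPerspective(depth_map, H, (w, h))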
S203, acquiring feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
In one embodiment, according to the depth image region corresponding to the first feature region, the depth values in that depth image region are averaged to obtain average depth information corresponding to the first feature region; feature depth information corresponding to the first feature region is then obtained from the average depth information.
For example, as shown in fig. 5, the depth image region 201 includes N pixel points, each pixel point i corresponding to a depth value X_i, and the N depth values are averaged to obtain the average depth value corresponding to the first feature region:
D = (X_1 + X_2 + … + X_N) / N
In another embodiment, according to the depth image region corresponding to the first feature region, the depth values in the depth image region are combined by weighted calculation to obtain depth information corresponding to the first feature region, and the feature depth information corresponding to the first feature region is obtained from this depth information. For example, the weight may be negatively correlated with the distance between a target pixel and the center pixel of the first feature region: the farther a target pixel is from the center pixel, the smaller the weight given to its depth value in the calculation. The application also covers any other weighting scheme.
In one embodiment, obtaining the feature depth information corresponding to the first feature region from the average depth information includes: performing compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region. The compensation calculation performs range compensation on the depth value corresponding to the feature depth information of the first feature region.
For example, as shown in fig. 5, the average depth value D of the depth image region 201 corresponding to the first feature region is obtained, and a range compensation of ±10% is applied to D (the compensation value is merely an example), so that the depth feature information corresponding to the first feature region is the depth value range [0.9D, 1.1D]. Areas of the depth image whose depth values fall outside this range are marked as null, yielding the depth image shown in fig. 6, a schematic diagram including depth feature information provided by an embodiment of the present application, which contains the second feature region 202 corresponding to the compensated depth feature information. A sketch of this computation follows.
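A minimal sketch of the averaging and compensation just described, assuming the depth map is a numpy array and the first feature region is an axis-aligned box (both assumptions of this sketch; the region could equally be an arbitrary pixel set, and ±10% is only the example margin):

    import numpy as np

    def feature_depth_range(depth_map, face_box, margin=0.10):
        # D = (X_1 + ... + X_N) / N over the first feature region,
        # then +/-10% range compensation: [0.9D, 1.1D].
        x, y, w, h = face_box
        d_mean = float(np.mean(depth_map[y:y + h, x:x + w]))
        return (1.0 - margin) * d_mean, (1.0 + margin) * d_mean

    def depth_mask(depth_map, depth_range):
        # Pixels outside the range correspond to the areas marked null in fig. 6.
        lo, hi = depth_range
        return (depth_map >= lo) & (depth_map <= hi)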
In the application, the feature depth information corresponding to the first feature region is obtained according to the average depth information, and the depth value corresponding to the average depth information is subjected to compensation calculation to obtain the feature depth information corresponding to the first feature region, so that the depth feature information corresponding to the first feature region is more accurate and reliable.
S204, correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
S204 refers to S103 described above, and is not described herein again.
According to the present application, the second feature region segmented from the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused by relying on a single algorithm (depth information alone or a segmentation algorithm alone) in the related art, avoids the impact of wrong segmentation on the imaging quality, and keeps the computational cost low while improving the reliability of image processing.
In one embodiment, as shown in fig. 7, the flowchart of an image processing method provided by an embodiment of the present application may be implemented by relying on a computer program and may be run on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-like application.
S301, determining a first feature region in the image, and acquiring a depth image of the image and a segmented image of a second feature region in the image.
S301 refers to S101 described above, and is not described herein again.
S302, acquiring feature depth information corresponding to the first feature region according to the depth image.
S302 refers to S202 and S203 described above, and is not described herein again.
S303A, determining a first candidate region in the segmented image whose confidence is in the first confidence range.
The segmented image comprises the second feature region and at least one other region, each region corresponding to a confidence of belonging to the second feature region. The distribution of the confidences conforms to a normal distribution model, and the confidence is divided into three ranges: a first confidence range (0, T1], a second confidence range (T1, T2), and a third confidence range [T2, 1], where T1 and T2 are values set as required.
In the present application, when the second feature region in the segmented image is corrected, regions with a confidence of 0 or approaching 0 are not considered; regions with a confidence in the third confidence range are corrected directly into the second feature region, or whether they belong to the second feature region is further verified by other related image segmentation algorithms and models.
Fig. 8 is a schematic diagram illustrating a second feature region being corrected according to depth feature information, provided in an embodiment of the present application. The segmented image shown in the upper half of fig. 8 includes the second feature region 102, a first region 103, a second region 104, a third region 105, and a fourth region 106. The confidence that the first region 103 belongs to the second feature region 102 is 0.5, the confidence that the second region 104 belongs to the second feature region 102 is 0.6, the confidence that the third region 105 belongs to the second feature region 102 is 0.2, and the confidence that the fourth region 106 belongs to the second feature region 102 is 0; the fourth region 106 is accordingly determined to be the background region.
A first candidate region whose confidence is in the first confidence range (0, T1] is determined in the segmented image, where T1 is a value set as required. For example, if the first confidence range is (0, 0.3], then in the segmented image shown in the upper half of fig. 8, the first candidate region is the third region 105 with a confidence of 0.2. It is understood that the first candidate region may be a single pixel or a set of pixels, and the divided regions shown in this application are only examples; a sketch of this partition follows.
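Purely illustrative: assuming each segmented region carries a confidence of belonging to the second feature region, partitioning the regions into the three ranges (with the example thresholds T1 = 0.3 and T2 = 0.7) could look like this:

    def partition_candidates(regions, t1=0.3, t2=0.7):
        # regions: list of (region_id, confidence) pairs -- an assumed
        # representation, not one prescribed by the text.
        first_candidates = [r for r, c in regions if 0.0 < c <= t1]   # (0, T1]
        second_candidates = [r for r, c in regions if t1 < c < t2]    # (T1, T2)
        high_confidence = [r for r, c in regions if c >= t2]          # [T2, 1]
        return first_candidates, second_candidates, high_confidence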
S304A, determining a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
According to the feature depth information corresponding to the first feature region and the first candidate region, a first target region belonging to the second feature region is determined within the first candidate region. The condition for being a first target region is that the corresponding depth image region is contained in the region determined by the feature depth information, that is, the corresponding depth values fall within the depth value range in the feature depth information corresponding to the first feature region.
For example, the third region 105, which is the first candidate region in the segmented image shown in the upper half of fig. 8, is included in the region 301 determined by the feature depth information corresponding to the first feature region in the depth image, so that the third region 105 belongs to the first target region.
S303B, determining a second candidate region in the segmented image whose confidence is in the second confidence range.
According to steps S303A and S304A, the first target region is obtained from the first candidate region whose confidence is in the first confidence range; according to steps S303B, S304B, and S305B, the second target region is obtained from the second candidate region whose confidence is in the second confidence range. It will be appreciated that the present application also encompasses other fusion algorithms for determining the second target region, of which the following is only one possible implementation.
A second candidate region whose confidence is in the second confidence range (T1, T2) is determined in the segmented image, where T1 and T2 are values set as required. For example, if the second confidence range is (0.3, 0.7), then in the segmented image shown in the upper half of fig. 8, the second candidate regions are the first region 103 with a confidence of 0.5 and the second region 104 with a confidence of 0.6. It is to be understood that a second candidate region may be a single pixel or a set of pixels, and the divided regions shown in this application are only examples.
S304B, obtaining the feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region.
The confidence of the second candidate region is recalculated according to the feature depth information corresponding to the second candidate region: the confidence value and the corresponding depth value of the second candidate region are fused in a single calculation, from which the confidence of the second candidate region is obtained anew.
For example, the absolute error α between the depth value of the target region within the region 202 in the upper-half depth image of fig. 8 and the average depth value corresponding to the face region is calculated; α is then fused into the calculation for the first region 103 and the second region 104 included in the second candidate region, using the following formula:
S(x)=(1-α)×M(x)+α×P(x);
where M(x) indicates whether region x in the second candidate region belongs to the second feature region determined by the feature depth information in the depth image: if so, M(x) = 1; if not, M(x) = 0. P(x) denotes the confidence of region x in the second candidate region.
In other words, according to the error between the depth value corresponding to each region in the second candidate region and the average depth value of the face region (i.e., the first feature region), different weights are assigned in the fusion: the larger the error α, the smaller the weight (1 - α) given to the depth-based term M(x) and the larger the weight retained by the original confidence P(x), and vice versa.
For example, the second candidate regions are the first region 103 with a confidence of 0.5 and the second region 104 with a confidence of 0.6; according to their respective depth values, the confidence of the first region 103 is recalculated as 0.6, and the confidence of the second region 104 is recalculated as 0.7.
It should be understood that the above fusion calculation is only one example of fusing the confidence value and the depth value to recalculate the confidence of the second candidate region; the application also covers any other fusion calculation method. A sketch of this recalculation follows.
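A minimal sketch of the recalculation, assuming α has already been normalized to [0, 1] (a normalization the text does not spell out):

    def recalculate_confidence(m_x, p_x, alpha):
        # S(x) = (1 - alpha) * M(x) + alpha * P(x), where m_x is 1 if
        # region x lies inside the depth-derived second feature region
        # (else 0), p_x is the original segmentation confidence, and
        # alpha is the absolute depth error against the first feature
        # region's mean depth, assumed normalized to [0, 1].
        return (1.0 - alpha) * m_x + alpha * p_x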
S305B, determining a second target region belonging to the second feature region in the segmented image according to the confidence of the second candidate region after recalculation.
According to the recalculated confidence of the second candidate region, regions whose confidence is higher than the correction threshold are determined as second target regions belonging to the second feature region. For example, the second candidate regions are the first region 103 with a confidence of 0.5 and the second region 104 with a confidence of 0.6; their confidences are recalculated from their respective depth values as 0.6 and 0.7. If regions with a confidence higher than 0.55 are determined as second target regions, the first region 103 and the second region 104 are both determined as second target regions, as in the sketch below.
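Continuing the sketch above, selecting second target regions by the correction threshold could look like this (0.55 is just the example value from the text):

    def select_second_targets(recalculated, threshold=0.55):
        # recalculated: list of (region_id, S(x)) pairs from the previous
        # sketch; keep regions whose recalculated confidence exceeds the
        # correction threshold.
        return [region for region, s in recalculated if s > threshold]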
S306, correcting the second feature region according to the target region.
In one embodiment, the second feature region is corrected based on both the first target region and the second target region. In another embodiment, the target region comprises only a first target region, or only a second target region, and the second feature region is corrected accordingly.
Correcting the second feature region according to the target region includes fusing the target region into the second feature region. As shown in fig. 8, the third region 105, as the first target region, and the first region 103 and the second region 104, as the second target regions, are fused into the second feature region 102, yielding the second feature region 302 shown in the lower half of fig. 8.
In the present application, candidate regions whose confidences fall in different confidence ranges in the segmented image are processed separately, and the confidence of a candidate region is recalculated by fusing in the depth information of the depth image, further improving the accuracy of correcting the second feature region in the segmented image.
According to the present application, the second feature region segmented from the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused by relying on a single algorithm (depth information alone or a segmentation algorithm alone) in the related art, avoids the impact of wrong segmentation on the imaging quality, and keeps the computational cost low while improving the reliability of image processing.
In one embodiment, as shown in fig. 9, a flowchart of an image processing method provided by an embodiment of the present application may be implemented by relying on a computer program and may be run on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-like application.
Specifically, the image processing method includes:
S401, according to the first feature region in the image, acquiring a depth image of the image and a segmented image of a second feature region in the image, where the second feature region includes the first feature region.
S401 refers to S101 described above, and is not described herein again.
S402, acquiring a depth image region corresponding to the first feature region according to the aligned depth image and image.
S402 refers to S202 described above, and is not described herein again.
S403, acquiring feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
According to the depth image region corresponding to the first feature region, the depth values of the depth image region are averaged to obtain the average depth information corresponding to the first feature region; compensation calculation is then performed on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
S403 refers to S203 described above, and is not described herein again.
S404A, a first candidate region in the segmented image whose confidence is in the first confidence range is determined.
S404A is referred to above as S303A and will not be described herein.
S405A, determining a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
S405A is referred to above as S304A and will not be described herein.
S404B, a second candidate region in the segmented image whose confidence is in the second confidence range is determined.
S404B is referred to above as S303B and will not be described herein.
S405B, obtaining feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region.
S405B is referred to above as S304B and will not be described herein.
S406B, determining a second target region belonging to the second feature region in the segmented image according to the confidence of the second candidate region after recalculation.
S406B is referred to above as S305B and will not be described herein.
S407, correcting the second feature region according to the target region.
S407 refers to S306 described above, and is not described herein again.
According to the present application, the second feature region segmented from the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused by relying on a single algorithm (depth information alone or a segmentation algorithm alone) in the related art, avoids the impact of wrong segmentation on the imaging quality, and keeps the computational cost low while improving the reliability of image processing.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 10, a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application is shown. The image processing apparatus may be implemented as all or a part of an apparatus by software, hardware, or a combination of both. The image processing apparatus includes an image acquisition module 1001, an information acquisition module 1002, and an image correction module 1003.
An image obtaining module 1001, configured to determine a first feature region in an image, and obtain a depth image of the image and a segmented image of a second feature region in the image, where the second feature region includes the first feature region;
an information obtaining module 1002, configured to obtain feature depth information corresponding to the first feature region according to the depth image;
an image modification module 1003, configured to modify the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
In one embodiment, the image modification module 1003 comprises:
the target determining unit is used for determining a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region;
and the target correction unit is used for correcting the second feature region according to the target region.
In one embodiment, the target determination unit comprises:
the first candidate subunit is used for determining a first candidate region with the confidence coefficient in the segmentation image in a first confidence coefficient range;
and the first target subunit is used for determining a first target area belonging to the second feature area in the segmentation image according to the feature depth information corresponding to the first feature area and the first candidate area.
In one embodiment, the target determination unit comprises:
the second candidate subunit is used for determining a second candidate region with the confidence coefficient in the segmentation image in a second confidence coefficient range;
the second calculating subunit is configured to obtain feature depth information corresponding to the second candidate region according to the depth image, and recalculate the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region;
a second target subunit, configured to determine, according to the confidence of the second candidate region after the recalculation, a second target region in the segmented image, where the second target region belongs to the second feature region.
In one embodiment, the second calculation subunit is specifically configured to:
determining weight information of the feature depth information corresponding to the second candidate region according to the feature depth information corresponding to the first feature region;
and recalculating the confidence of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
In one embodiment, the target modification unit is specifically configured to:
fusing the target region to the second feature region.
In one embodiment, information acquisition module 1002 includes:
the alignment unit is used for acquiring a depth image region corresponding to the first feature region according to the aligned depth image and image;
and the obtaining unit is used for obtaining the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
In one embodiment, the obtaining unit includes:
an average calculating subunit, configured to perform average calculation on depth values corresponding to the depth image area according to the depth image area corresponding to the first feature area, so as to obtain average depth information corresponding to the first feature area;
and the feature obtaining subunit is configured to obtain feature depth information corresponding to the first feature region according to the average depth information.
In one embodiment, the feature obtaining subunit is specifically configured to:
and performing compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
According to the present application, the second feature region segmented from the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused by relying on a single algorithm (depth information alone or a segmentation algorithm alone) in the related art, avoids the impact of wrong segmentation on the imaging quality, and keeps the computational cost low while improving the reliability of image processing.
It should be noted that, when the image processing apparatus provided in the foregoing embodiment executes the image processing method, only the division of the functional modules is illustrated, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept, and details of implementation processes thereof are referred to in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the image processing method according to the embodiment shown in fig. 1 to 9, and a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to 9, which is not described herein again.
The present application further provides a computer program product, where at least one instruction is stored, and the at least one instruction is loaded by the processor and executes the image processing method according to the embodiment shown in fig. 1 to 9, where a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to 9, and is not described herein again.
Please refer to fig. 11, which is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 11, the electronic device 1100 may include: a processor 1101, a network interface 1104, a user interface 1103, a memory 1105, a communication bus 1102.
Wherein a communication bus 1102 is used to enable connective communication between these components.
The user interface 1103 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1103 may also include a standard wired interface and a wireless interface.
The network interface 1104 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
Processor 1101 may include one or more processing cores. The processor 1101 connects various portions of the electronic device 1100 using various interfaces and lines, and performs various functions of the electronic device 1100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1105 and calling data stored in the memory 1105. Optionally, the processor 1101 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 1101 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; and the modem handles wireless communications. It is understood that the modem may not be integrated into the processor 1101 but may instead be implemented by a separate chip.
The memory 1105 may include Random Access Memory (RAM) or Read-Only Memory (ROM). Optionally, the memory 1105 includes a non-transitory computer-readable storage medium. The memory 1105 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1105 may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for functions (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like; the stored-data area may store the data involved in the above method embodiments. The memory 1105 may optionally also be a storage device located remotely from the processor 1101. As shown in fig. 11, the memory 1105, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an image processing application program.
In the electronic device 1100 shown in fig. 11, the user interface 1103 is mainly used to provide an input interface for a user and acquire data input by the user, while the processor 1101 may be configured to invoke the image processing application stored in the memory 1105 and specifically perform the following operations:
determining a first characteristic region in an image, and acquiring a depth image of the image and a segmentation image of a second characteristic region in the image, wherein the second characteristic region comprises the first characteristic region;
acquiring feature depth information corresponding to the first feature region according to the depth image;
and correcting the second characteristic region in the segmented image according to the characteristic depth information corresponding to the first characteristic region.
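For illustration only, the following Python sketch (using numpy) shows one way these three operations could fit together. The function name, the mask and confidence-map representations, and every threshold are assumptions made for the example, not details fixed by this application:

import numpy as np

def correct_segmentation(depth_image, seg_confidence, first_region_mask,
                         depth_tolerance=0.15, low_conf=0.2,
                         keep_threshold=0.5):
    # depth_image:       H x W depth map, assumed already aligned to the image.
    # seg_confidence:    H x W confidence map of the second feature region
    #                    (e.g. a whole portrait), values in [0, 1].
    # first_region_mask: H x W boolean mask of the first feature region
    #                    (e.g. a detected face) contained in the second region.

    # Feature depth of the first region; a plain mean serves as a stand-in.
    feature_depth = float(depth_image[first_region_mask].mean())

    # Pixels whose depth is close to the feature depth are kept as part of
    # the second feature region even where segmentation alone was unsure.
    depth_close = np.abs(depth_image - feature_depth) < depth_tolerance
    corrected = (seg_confidence >= keep_threshold) | \
                ((seg_confidence >= low_conf) & depth_close)
    return corrected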
In an embodiment, the processor 1101 performs the correction on the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, specifically performing:
determining a target area belonging to the second characteristic area in the segmentation image according to the characteristic depth information corresponding to the first characteristic area;
and correcting the second characteristic region according to the target region.
In an embodiment, the processor 1101 determines, according to the feature depth information corresponding to the first feature region, a target region belonging to the second feature region in the segmented image, and specifically performs:
determining, in the segmented image, a first candidate region whose confidence is within a first confidence range;
and determining a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
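As one concrete reading of these two steps, the sketch below takes pixels whose segmentation confidence falls inside an assumed uncertain band as first candidates, and keeps those whose depth agrees with the first feature region's depth; the band and the tolerance are illustrative values only:

import numpy as np

def first_target_region(seg_confidence, depth_image, feature_depth,
                        conf_range=(0.3, 0.5), depth_tolerance=0.15):
    low, high = conf_range
    # First candidate region: confidence inside the first confidence range.
    candidate = (seg_confidence >= low) & (seg_confidence < high)
    # Keep the candidates whose depth matches the first feature region's depth.
    depth_match = np.abs(depth_image - feature_depth) < depth_tolerance
    return candidate & depth_match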
In an embodiment, the processor 1101 determines, according to the feature depth information corresponding to the first feature region, a target region belonging to the second feature region in the segmented image, and specifically performs:
determining, in the segmented image, a second candidate region whose confidence is within a second confidence range;
acquiring feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence coefficient of the second candidate region according to the feature depth information corresponding to the second candidate region;
and determining a second target region belonging to the second characteristic region in the segmented image according to the recalculated confidence of the second candidate region.
In an embodiment, the processor 1101 recalculates the confidence level of the second candidate region according to the feature depth information corresponding to the second candidate region, specifically performing:
determining weight information of the feature depth information corresponding to the second candidate region according to the feature depth information corresponding to the first feature region;
and recalculating the confidence of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
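One possible shape for this recalculation is sketched below: candidates in a lower confidence band are re-scored with a weight derived from their depth distance to the first feature region. The Gaussian weighting and all parameter values are one plausible choice for the example, not something this application prescribes:

import numpy as np

def recalculate_confidence(seg_confidence, depth_image, feature_depth,
                           conf_range=(0.1, 0.3), sigma=0.2,
                           keep_threshold=0.5):
    low, high = conf_range
    # Second candidate region: confidence inside the second confidence range.
    candidate = (seg_confidence >= low) & (seg_confidence < high)

    # Weight information: pixels whose depth is close to the first feature
    # region's depth receive a weight close to 1, distant pixels close to 0.
    depth_diff = np.abs(depth_image - feature_depth)
    weight = np.exp(-(depth_diff / sigma) ** 2)

    # Recalculated confidence: the original value boosted by the depth weight.
    new_conf = np.where(candidate, seg_confidence * (1.0 + weight),
                        seg_confidence)
    # Second target region: recalculated confidence clears the keep threshold.
    second_target = candidate & (new_conf >= keep_threshold)
    return new_conf, second_target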
In an embodiment, the processor 1101 corrects the second feature region according to the target region, specifically performing:
fusing the target region to the second feature region.
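Read as binary masks, "fusing" can be as simple as a pixel-wise union, which is the assumed interpretation in this one-line sketch:

import numpy as np

def fuse_regions(second_region_mask, target_region_mask):
    # Union of the corrected target region and the segmented second region.
    return np.logical_or(second_region_mask, target_region_mask)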
In an embodiment, the processor 1101, obtaining feature depth information corresponding to the first feature region according to the depth image, specifically performs:
acquiring a depth image area corresponding to the first characteristic area according to the aligned depth image and the aligned image;
and acquiring feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
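Assuming the depth map has already been registered pixel-for-pixel to the color image, extracting the depth image region can amount to indexing the depth map with the bounding box of the first feature region, as in this sketch (the function name is made up for the example):

import numpy as np

def depth_region_for(first_region_mask, depth_image):
    # Bounding box of the first feature region (mask assumed non-empty).
    ys, xs = np.nonzero(first_region_mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    # Depth patch covering the first feature region (e.g. the face box).
    return depth_image[y0:y1, x0:x1]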
In an embodiment, the processor 1101 obtains feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region, and specifically performs:
according to the depth image area corresponding to the first characteristic area, carrying out average calculation on the depth values corresponding to the depth image area to obtain average depth information corresponding to the first characteristic area;
and obtaining feature depth information corresponding to the first feature region according to the average depth information.
In an embodiment, when obtaining the feature depth information corresponding to the first feature region according to the average depth information, the processor 1101 specifically performs:
and performing compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
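The averaging step is stated explicitly above; the form of the compensation is not, so the additive offset below is an assumption, e.g. stepping from the depth of a face surface to the depth of the torso behind it:

import numpy as np

def feature_depth_with_compensation(depth_region, offset=0.05):
    # Average depth information over the depth image region.
    mean_depth = float(np.nanmean(depth_region))
    # Compensation calculation: shift the average by an assumed offset.
    return mean_depth + offset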
According to the present application, the second feature region segmented from the image is corrected according to the feature depth information of the first feature region; in other words, the depth information provides a reference for the image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability that arise in the related art from relying on a single algorithm, whether depth information alone or a segmentation algorithm alone, avoids the impact of wrong segmentation on the imaging performance of the image, and keeps the computation cost low while improving the reliability of image processing.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure is only a preferred embodiment of the present application and is not intended to limit the scope of the claims of the present application; equivalent variations and modifications made according to the claims of the present application shall still fall within the scope covered by the present application.

Claims (12)

1. An image processing method, characterized in that the method comprises:
determining a first characteristic region in an image, and acquiring a depth image of the image and a segmentation image of a second characteristic region in the image, wherein the second characteristic region comprises the first characteristic region;
acquiring feature depth information corresponding to the first feature region according to the depth image;
and correcting the second characteristic region in the segmented image according to the feature depth information corresponding to the first feature region.
2. The method of claim 1, wherein the correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region comprises:
determining a target region belonging to the second characteristic region in the segmented image according to the feature depth information corresponding to the first characteristic region;
and correcting the second characteristic region according to the target region.
3. The method according to claim 2, wherein the determining a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region comprises:
determining, in the segmented image, a first candidate region whose confidence is within a first confidence range;
and determining a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
4. The method according to claim 2 or 3, wherein the determining a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region comprises:
determining, in the segmented image, a second candidate region whose confidence is within a second confidence range;
acquiring feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region;
and determining a second target region belonging to the second characteristic region in the segmented image according to the recalculated confidence of the second candidate region.
5. The method of claim 4, wherein the recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region comprises:
determining weight information of the feature depth information corresponding to the second candidate region according to the feature depth information corresponding to the first feature region;
and recalculating the confidence of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
6. The method of claim 2, wherein the modifying the second feature region according to the target region comprises:
fusing the target region to the second feature region.
7. The method of claim 1, wherein the acquiring feature depth information corresponding to the first feature region according to the depth image comprises:
acquiring a depth image region corresponding to the first feature region according to the aligned depth image and the aligned image;
and acquiring the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
8. The method of claim 7, wherein the acquiring feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region comprises:
performing, according to the depth image region corresponding to the first feature region, average calculation on the depth values corresponding to the depth image region to obtain average depth information corresponding to the first feature region;
and obtaining the feature depth information corresponding to the first feature region according to the average depth information.
9. The method of claim 8, wherein the obtaining the feature depth information corresponding to the first feature region according to the average depth information comprises:
performing compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
10. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for determining a first characteristic region in an image, and acquiring a depth image of the image and a segmentation image of a second characteristic region in the image, wherein the second characteristic region comprises the first characteristic region;
the information acquisition module is used for acquiring feature depth information corresponding to the first feature region according to the depth image;
and the image correction module is used for correcting the second characteristic region in the segmented image according to the feature depth information corresponding to the first feature region.
11. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 9.
12. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any one of claims 1 to 9.
CN202111417587.2A 2021-11-25 2021-11-25 Image processing method, image processing device, storage medium and electronic equipment Pending CN114240963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111417587.2A CN114240963A (en) 2021-11-25 2021-11-25 Image processing method, image processing device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111417587.2A CN114240963A (en) 2021-11-25 2021-11-25 Image processing method, image processing device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114240963A true CN114240963A (en) 2022-03-25

Family

ID=80751608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111417587.2A Pending CN114240963A (en) 2021-11-25 2021-11-25 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114240963A (en)

Similar Documents

Publication Publication Date Title
KR102319177B1 (en) Method and apparatus, equipment, and storage medium for determining object pose in an image
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
WO2020192568A1 (en) Facial image generation method and apparatus, device and storage medium
WO2022156640A1 (en) Gaze correction method and apparatus for image, electronic device, computer-readable storage medium, and computer program product
US8644551B2 (en) Systems and methods for tracking natural planar shapes for augmented reality applications
CN110363817B (en) Target pose estimation method, electronic device, and medium
JP5527213B2 (en) Image orientation determination apparatus, image orientation determination method, and image orientation determination program
CN109389555B (en) Panoramic image splicing method and device
WO2019169884A1 (en) Image saliency detection method and device based on depth information
CN109711268B (en) Face image screening method and device
CN110889824A (en) Sample generation method and device, electronic equipment and computer readable storage medium
CN112836625A (en) Face living body detection method and device and electronic equipment
CN111008935A (en) Face image enhancement method, device, system and storage medium
CN116843834A (en) Three-dimensional face reconstruction and six-degree-of-freedom pose estimation method, device and equipment
CN115171199A (en) Image processing method, image processing device, computer equipment and storage medium
CN114627244A (en) Three-dimensional reconstruction method and device, electronic equipment and computer readable medium
CN113592706B (en) Method and device for adjusting homography matrix parameters
CN113240656B (en) Visual positioning method and related device and equipment
CN111353325A (en) Key point detection model training method and device
CN109785367B (en) Method and device for filtering foreign points in three-dimensional model tracking
CN114399423B (en) Image content removing method, system, medium, device and data processing terminal
CN114240963A (en) Image processing method, image processing device, storage medium and electronic equipment
CN115082992A (en) Face living body detection method and device, electronic equipment and readable storage medium
CN114511877A (en) Behavior recognition method and device, storage medium and terminal
CN113191462A (en) Information acquisition method, image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230802

Address after: No. 18, Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province, 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Room F, 11/F, Beihai Center, 338 Hennessy Road, Wan Chai District

Applicant before: Sonar sky Information Consulting Co.,Ltd.