CN115379194B - Quantization method and device for depth image, terminal equipment and storage medium

Quantization method and device for depth image, terminal equipment and storage medium

Info

Publication number
CN115379194B
CN115379194B (application CN202110546854.XA)
Authority
CN
China
Prior art keywords
depth
scenes
depth image
image
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110546854.XA
Other languages
Chinese (zh)
Other versions
CN115379194A (en)
Inventor
张超
朱辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110546854.XA
Publication of CN115379194A
Application granted
Publication of CN115379194B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a quantization method and device for a depth image, a terminal device and a storage medium. The quantization method of the depth image comprises the following steps: acquiring target images under N scenes, wherein N is a positive integer greater than or equal to 2; determining the effective point number corresponding to the depth image contained in the target image under each of the N scenes; and quantifying imaging of the depth image based on the effective point numbers corresponding to the depth images under the N scenes. Because imaging of the depth image is quantized from the number of effective points in each of the N scenes rather than from the number of all depth points in the depth image, the influence of different scenes on depth imaging can be taken into account, so the quantization of the depth image is more accurate and the range of scenes to which it applies is enlarged.

Description

Quantization method and device for depth image, terminal equipment and storage medium
Technical Field
The disclosure relates to the technical field of information processing, and in particular relates to a quantization method and device for a depth image, a terminal device and a storage medium.
Background
With the development of terminal devices, the depth camera has become an indispensable component of a terminal device. A depth camera not only captures real-time images but can also compute additional information about a target object (such as depth information), and it is widely applied to various three-dimensional (3D) and four-dimensional (4D) camera scenes. The existing quantization scheme for a depth image obtained by a depth camera directly uses all pixel points of the full field of view, so the quantization mode is single and its application range is limited.
Disclosure of Invention
The disclosure provides a quantization method and device for depth images, terminal equipment and storage media.
According to a first aspect of embodiments of the present disclosure, there is provided a quantization method of a depth image, including:
Acquiring target images under N scenes, wherein N is a positive integer greater than or equal to 2;
Determining the effective point number corresponding to the depth image contained in the target image under the N scenes;
And quantifying imaging of the depth images based on the effective points corresponding to the depth images under the N scenes.
In some embodiments, the target image comprises a color image; the determining the effective point number corresponding to the depth image contained in the target image under the N scenes includes:
determining feature points of the color image of a Kth scene in the N scenes; k is a positive integer greater than or equal to 1 and less than or equal to N;
Determining feature points of the depth image of the Kth scene in the N scenes;
determining the effective points corresponding to the depth image of the Kth scene based on the feature points of the color image of the Kth scene and the feature points of the depth image of the Kth scene, until the effective points corresponding to the depth images of the N scenes are determined; the Kth scene is any one of the N scenes, so the effective point number is determined for the depth image of every scene.
In some embodiments, the determining the valid point corresponding to the depth image of the kth scene based on the feature point of the color image of the kth scene and the feature point of the depth image of the kth scene includes:
determining the theoretical depth feature points of the Kth scene based on the feature points of the color image of the Kth scene and a preset feature point redundancy between the color image and the depth image;
And determining the effective point corresponding to the depth image of the Kth scene based on the theoretical depth feature point of the Kth scene, the feature point of the depth image of the Kth scene and the resolution of the depth image of the Kth scene.
In some embodiments, the determining the feature points of the color image of the kth scene of the N scenes includes:
extracting characteristic points of the color image of the Kth scene, and counting first characteristic points in one grid and second characteristic points in all grids in the color image of the Kth scene;
And determining the feature point of the color image of the Kth scene based on the ratio of the second feature point to the first feature point.
In some embodiments, the determining feature points of the depth image of the kth scene of the N scenes includes:
Extracting characteristic points of the depth image in the Kth scene, and counting third characteristic points in one grid and fourth characteristic points in all grids in the depth image of the Kth scene;
And determining the feature point of the depth image of the Kth scene based on the ratio of the fourth feature point to the third feature point.
In some embodiments, the quantifying imaging of the depth image based on the number of valid points corresponding to the depth image under the N scenes includes:
According to the effective points corresponding to the depth images under the N scenes and the weights occupied by the N scenes, counting the sum of the effective points of the depth images under the N scenes;
Imaging of the depth image is quantified based on the valid point sums.
In some embodiments, the method further comprises:
and determining the weights occupied by the N scenes according to the using frequencies of the N scenes.
According to a second aspect of embodiments of the present disclosure, there is provided a quantization apparatus of a depth image, the apparatus comprising:
The image acquisition module is configured to acquire target images under N scenes, wherein N is a positive integer greater than or equal to 2;
the point determining module is configured to determine the effective points corresponding to the depth images contained in the target images under the N scenes;
And the image quantization module is configured to quantize imaging of the depth images based on the effective points corresponding to the depth images under the N scenes.
In some embodiments, the point determination module includes:
A color feature module configured to determine feature points of the color image of a kth scene of the N scenes; k is a positive integer greater than or equal to 1 and less than or equal to N;
A depth feature module configured to determine feature points of the depth image of the kth scene of the N scenes;
The depth point module is configured to determine the effective points corresponding to the depth image of the Kth scene based on the feature points of the color image of the Kth scene and the feature points of the depth image of the Kth scene, until the effective points corresponding to the depth images of the N scenes are determined; the Kth scene is any one of the N scenes, so the effective point number is determined for the depth image of every scene.
In some embodiments, the depth point module is further configured to determine the theoretical depth feature point number of the Kth scene based on the feature point number of the color image of the Kth scene and a preset feature point redundancy between the color image and the depth image; and to determine the effective point number corresponding to the depth image of the Kth scene based on the theoretical depth feature point number of the Kth scene, the feature point number of the depth image of the Kth scene and the resolution of the depth image of the Kth scene.
In some embodiments, the color feature module is further configured to perform feature point extraction on the color image of the Kth scene, and to count the first feature point number in one grid and the second feature point number in all grids of the rasterized color image of the Kth scene; and to determine the feature point number of the color image of the Kth scene based on the ratio of the second feature point number to the first feature point number.
In some embodiments, the depth feature module is further configured to perform feature point extraction on the depth image of the Kth scene, and to count the third feature point number in one grid and the fourth feature point number in all grids of the rasterized depth image of the Kth scene; and to determine the feature point number of the depth image of the Kth scene based on the ratio of the fourth feature point number to the third feature point number.
In some embodiments, the image quantization module is further configured to count a sum of the effective points of the depth images under the N scenes according to the effective points corresponding to the depth images under the N scenes and weights occupied by the N scenes; imaging of the depth image is quantified based on the valid point sums.
In some embodiments, the apparatus is further configured to determine weights occupied by the N scenes by using frequencies of the N scenes.
According to a third aspect of embodiments of the present disclosure, there is provided a terminal device, including at least: a processor and a memory for storing executable instructions capable of executing on the processor, wherein:
the processor is configured to execute the executable instructions, when the executable instructions are executed, to perform the steps in the quantization method for a depth image provided in the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement a method of quantifying a depth image as provided in the first aspect described above.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
The embodiment of the disclosure determines the effective points corresponding to the depth images contained in the target images under the N scenes, and quantifies the imaging of the depth images based on those effective points. Thus, the quantization of depth-image imaging in the embodiment of the disclosure is based on the number of effective points in different scenes rather than on the number of all depth points in the depth image; the influence of different scenes on depth imaging can therefore be taken into account, making the quantization of the depth image more accurate and enlarging the range of scenes to which it applies.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a quantization method of a depth image according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of the scene architecture corresponding to the shooting functions defined by a terminal device according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of the weight architecture corresponding to the scenes of the shooting functions defined by a terminal device according to an embodiment of the present disclosure.
Fig. 4 is a flowchart of a method for obtaining the feature points of a color image according to an embodiment of the present disclosure.
Fig. 5 is a flowchart of a method for obtaining the feature points of a depth image according to an embodiment of the present disclosure.
Fig. 6 is a first diagram of a quantization apparatus for a depth image according to an embodiment of the present disclosure.
Fig. 7 is a second diagram of a quantization apparatus for a depth image according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Fig. 1 is a flowchart of a quantization method of a depth image according to an embodiment of the present disclosure. As shown in fig. 1, the method, which is applied to a terminal device, includes the following steps:
S101, acquiring target images under N scenes, wherein N is a positive integer greater than or equal to 2;
s102, determining the effective points corresponding to the depth images contained in the target images under the N scenes;
s103, imaging of the depth image is quantized based on the effective points corresponding to the depth image under the N scenes.
In the embodiment of the disclosure, the quantization method of the depth image is applied to a terminal device having a color camera and a depth camera, and is widely applicable to scenes in which the color camera and the depth camera are used simultaneously, for example augmented reality (AR) scenes, simultaneous localization and mapping (SLAM) scenes, and 3D portrait scenes.
The terminal device may be a wearable electronic device or a mobile terminal, where the mobile terminal includes a mobile phone, a notebook or a tablet computer, and the wearable electronic device includes a smart watch or a smart bracelet; embodiments of the present disclosure are not limited thereto.
In step S101, the terminal device acquiring target images in N scenes includes: the terminal device uses a color camera and a depth camera to capture the target simultaneously in each of the N scenes, obtaining the target images of the N scenes. The target image comprises a color image acquired by the color camera and a depth image acquired by the depth camera. That is, acquiring the target images of the N scenes comprises acquiring color images of the N scenes and acquiring depth images of the N scenes.
Wherein the color image is an image in which pixels are composed of red (R), green (G) and blue (B) components; the depth image is an image in which three-dimensional depth feature information is stored.
The above scenes may include different shooting scenes of the terminal device. A scene may be set based on a shooting function of the terminal device, one shooting function corresponding to one scene. For example, a portrait scene may be set based on the portrait shooting function of the terminal device; a scenery scene based on the scenery shooting function; a motion scene based on the motion shooting function; and a night scene based on the night shooting function. Embodiments of the present disclosure are not limited thereto.
Different scenes yield different target images. For example, the target image acquired in a portrait scene differs from that acquired in a scenery scene; likewise, the target image acquired in a motion scene differs from that acquired in a night scene.
In step S102, the effective point number corresponding to the depth image differs from scene to scene. For example, the effective point number corresponding to the depth image in a portrait scene differs from that in a scenery scene; likewise, the effective point number corresponding to the depth image in a motion scene differs from that in a night scene.
Determining the effective point numbers corresponding to the depth images under the N scenes may comprise: determining the effective point numbers based on the feature points of the depth images under the N scenes; or determining them based on the number of all depth points within the field of view under the N scenes. Embodiments of the present disclosure are not limited thereto.
In the disclosed embodiments, the number of effective points of a depth image may be used to quantify the imaging of the depth image: the more effective points the depth image of a scene has, the better its imaging.
In step S103, quantizing the imaging of the depth image based on the effective point numbers corresponding to the depth images under the N scenes may comprise: quantizing the imaging based on the sum of the effective point numbers corresponding to the depth images in the N scenes; or quantizing the imaging based on the effective point numbers corresponding to the depth images under the N scenes together with the weights occupied by the N scenes.
It can be appreciated that the quantization of depth-image imaging in the embodiment of the disclosure is based on the number of effective points in different scenes rather than on the number of all depth points in the depth image. The influence of different scenes on depth imaging is therefore taken into account, which makes the quantization more accurate and enlarges the range of scenes to which it applies.
In some embodiments, the target image comprises a color image; the determining the effective point number corresponding to the depth image contained in the target image under the N scenes includes:
determining feature points of the color image of a Kth scene in the N scenes; k is a positive integer greater than or equal to 1 and less than or equal to N;
Determining feature points of the depth image of the Kth scene in the N scenes;
determining the effective points corresponding to the depth image of the Kth scene based on the feature points of the color image of the Kth scene and the feature points of the depth image of the Kth scene, until the effective points corresponding to the depth images of the N scenes are determined; the Kth scene is any one of the N scenes, so the effective point number is determined for the depth image of every scene.
In the embodiment of the present disclosure, the kth scene is any one of N scenes. For example, N is 5, the kth scene may be the 1 st scene of the N scenes, or the kth scene may be the 4 th scene of the N scenes, which embodiments of the present disclosure are not limited.
The feature points of the color image are used for representing the sum of gray scale features of the color image; the feature points of the depth image are used for representing the sum of distance features of the depth image.
It should be noted that the effective point number corresponding to the depth image of each scene may be determined from the feature point number of the color image and the feature point number of the depth image under that scene; that is, the effective point number is determined in the same way for every scene.
In the embodiment of the disclosure, in determining the effective points corresponding to the depth images of the N scenes, the effective points corresponding to the depth image of the 1st scene through the depth image of the Nth scene may be obtained in turn, i.e. the effective points corresponding to the depth images of all scenes are obtained.
In some embodiments, the determining the feature points of the color image of the kth scene of the N scenes includes:
extracting characteristic points of the color image of the Kth scene, and counting first characteristic points in one grid and second characteristic points in all grids in the color image of the Kth scene;
And determining the feature point of the color image of the Kth scene based on the ratio of the second feature point to the first feature point.
In the embodiment of the present disclosure, feature point extraction is performed on the color images of the different scenes among the N scenes. For example, for a portrait scene, a histogram of oriented gradients (HOG) feature extraction method may be adopted to extract color feature points; for a still scene, a scale-invariant feature transform (SIFT) feature extraction method may be adopted; and for a motion scene, a difference of Gaussians (DoG) feature extraction method may be adopted.
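As an illustration only, the per-scene choice of extractor might look like the following sketch. It uses OpenCV, which the disclosure does not mention; the scene labels, function names, and the use of ORB corners as a stand-in for HOG-style feature points are assumptions (cv2.HOGDescriptor yields dense descriptors rather than keypoints), while OpenCV's SIFT detector is itself DoG-based.

```python
# Illustrative sketch: selecting a keypoint extractor per scene type, loosely
# following the disclosure's pairing (HOG / portrait, SIFT / still, DoG / motion).
import cv2

def extract_feature_points(gray_image, scene_type):
    if scene_type == "portrait":
        detector = cv2.ORB_create()           # stand-in for HOG-style points (assumption)
    elif scene_type == "still":
        detector = cv2.SIFT_create()          # scale-invariant feature transform
    else:  # motion and other scenes
        detector = cv2.SIFT_create(contrastThreshold=0.02)  # DoG-based, looser threshold
    return detector.detect(gray_image, None)  # list of cv2.KeyPoint
```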
The color image may be rasterized to obtain a plurality of grids, for example a grid of 4 rows by 4 columns or of 10 rows by 10 columns; embodiments of the present disclosure are not limited thereto.
Counting the first feature point number within one grid of the rasterized color image comprises: determining the first feature point number in one grid based on the resolution of the color image of the Kth scene, the total feature number of the color image of the Kth scene, and the effective color feature points extracted within that grid. Specifically, the effective color feature points are multiplied by the resolution of the color image, and the first feature point number is obtained from the ratio of this product to the total feature number of the color image of the Kth scene.
Counting the second feature point number over all grids comprises: determining the second feature point number from the product of the number of grids and the first feature point number.
In the embodiment of the disclosure, different scenes correspond to different first and second feature point numbers. The feature point number of the color image of the Kth scene is obtained from the ratio of the second feature point number of the Kth scene to the first feature point number of the Kth scene, and the feature point numbers of the color images of the N scenes are obtained in turn by the same method.
Illustratively, R1 represents the first feature point number of the Kth scene and N1 represents the second feature point number of the Kth scene; the feature point number N_Rgb of the color image of the Kth scene may be determined using formula (1).
N_Rgb=N1/R1 (1)
Thus, when the number of scenes is N, the feature point numbers corresponding to the color images of the different scenes are N_Rgb1 to N_RgbN, i.e. each scene corresponds to one color-image feature point number.
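A minimal sketch of this grid statistic follows, under one plausible reading of the counting described above (the translation leaves the exact one-grid count ambiguous); the grid size, the choice of reference cell, and all function names are assumptions.

```python
# Grid statistic behind formula (1): rasterize the image, count extracted
# feature points per grid cell, take R1 as the count in one reference cell and
# N1 as the count over all cells, then return N_Rgb = N1 / R1.
import numpy as np

def grid_counts(keypoints, image_shape, rows=4, cols=4):
    h, w = image_shape[:2]
    counts = np.zeros((rows, cols), dtype=int)
    for kp in keypoints:                 # cv2.KeyPoint objects with .pt = (x, y)
        x, y = kp.pt
        r = min(int(y * rows / h), rows - 1)
        c = min(int(x * cols / w), cols - 1)
        counts[r, c] += 1
    return counts

def grid_feature_number(keypoints, image_shape, ref_cell=(0, 0)):
    counts = grid_counts(keypoints, image_shape)
    r1 = max(int(counts[ref_cell]), 1)   # "first feature point number" (one grid)
    n1 = int(counts.sum())               # "second feature point number" (all grids)
    return n1 / r1                       # formula (1): N_Rgb = N1 / R1
```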
In some embodiments, the determining feature points of the depth image of the kth scene of the N scenes includes:
Extracting characteristic points of the depth image in the Kth scene, and counting third characteristic points in one grid and fourth characteristic points in all grids in the depth image of the Kth scene;
And determining the feature point of the depth image of the Kth scene based on the ratio of the fourth feature point to the third feature point.
In the embodiment of the present disclosure, feature point extraction is performed on the depth images of the different scenes among the N scenes. For example, for a portrait scene, a histogram of oriented gradients (HOG) feature extraction method may be adopted to extract depth feature points; for a still scene, a scale-invariant feature transform (SIFT) feature extraction method may be adopted; and for a motion scene, a difference of Gaussians (DoG) feature extraction method may be adopted.
It should be noted that feature extraction for the depth image differs from that for the color image: the depth image yields depth features, while the color image yields color features (for example, gray-scale features).
The above-mentioned rasterizing the depth image may obtain a plurality of grids, for example, 4 rows and 4 columns of grids, or 10 rows and 10 columns of grids may be obtained after rasterizing the depth image, which is not limited in the embodiments of the present disclosure.
Counting the third feature point number within one grid of the rasterized depth image comprises: determining the third feature point number in one grid based on the resolution of the depth image of the Kth scene, the total feature number of the depth image of the Kth scene, and the effective depth feature points extracted within that grid. Specifically, the effective depth feature points are multiplied by the resolution of the depth image, and the third feature point number is obtained from the ratio of this product to the total feature number of the depth image.
Counting the fourth feature point number over all grids comprises: determining the fourth feature point number from the product of the number of grids obtained by rasterizing the depth image and the third feature point number.
In the embodiment of the disclosure, different scenes correspond to different third and fourth feature point numbers. The feature point number of the depth image of the Kth scene is obtained from the ratio of the fourth feature point number of the Kth scene to the third feature point number of the Kth scene, and the feature point numbers of the depth images of the N scenes are obtained in turn by the same method.
Illustratively, R2 represents the third feature point number of the Kth scene and N2 represents the fourth feature point number of the Kth scene; the feature point number N_depth of the depth image of the Kth scene may be determined using formula (2).
N_depth=N2/R2 (2)
Thus, when the number of scenes is N, the feature point numbers corresponding to the depth images of the different scenes are N_depth1 to N_depthN, i.e. each scene corresponds to one depth-image feature point number.
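Since the depth-image statistic mirrors the color-image one, the helper sketched above can simply be reused; depth_gray below stands for the depth map converted to an 8-bit grayscale image (an assumption, as the disclosure does not specify the conversion).

```python
# Formula (2), N_depth = N2 / R2, computed with the helpers sketched above.
depth_kps = extract_feature_points(depth_gray, scene_type="still")
n_depth = grid_feature_number(depth_kps, depth_gray.shape)
```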
In some embodiments, the determining the valid point corresponding to the depth image of the kth scene based on the feature point of the color image of the kth scene and the feature point of the depth image of the kth scene includes:
determining the theoretical depth feature points of the Kth scene based on the feature points of the color image of the Kth scene and a preset feature point redundancy between the color image and the depth image;
And determining the effective point corresponding to the depth image of the Kth scene based on the theoretical depth feature point of the Kth scene, the feature point of the depth image of the Kth scene and the resolution of the depth image of the Kth scene.
In the embodiment of the present disclosure, determining the theoretical depth feature point number of the Kth scene based on the feature point number of the color image of the Kth scene and the preset feature point redundancy between the color image and the depth image comprises: adding 1 to the preset feature point redundancy to obtain a sum, and determining the theoretical depth feature point number of the Kth scene from the ratio of the feature point number of the color image of the Kth scene to that sum.
It should be noted that the theoretical depth feature point number of the Kth scene can be determined using formula (3), where N_Rgb represents the feature point number of the color image of the Kth scene, δ represents the preset feature point redundancy between the color image and the depth image, and N_depth' represents the theoretical depth feature point number of the Kth scene.
N_depth'=N_Rgb/(δ+1) (3)
Thus, when the number of scenes is N, the theoretical depth feature point numbers corresponding to the different scenes are N_depth'1 to N_depth'N, i.e. each scene corresponds to one theoretical depth feature point number.
It should be noted that the preset feature point redundancy between the color image and the depth image may be set according to the actual situation. In the embodiment of the disclosure, because the feature point number of the color image of the Kth scene is greater than that of the depth image, the redundancy may be set to less than 1, for example within the range of 0.3 to 0.8.
In the embodiment of the disclosure, the terminal device may perform normalization processing on the depth image of the kth scene to obtain the resolution of the depth image of the kth scene.
The determining the effective point corresponding to the depth image of the kth scene based on the theoretical depth feature point of the kth scene, the feature point of the depth image of the kth scene and the resolution of the depth image of the kth scene includes: and taking the ratio of the theoretical depth feature points of the Kth scene to the feature points of the depth image of the Kth scene, and obtaining the effective points corresponding to the depth image of the Kth scene based on the product of the ratio and the resolution of the depth image of the Kth scene.
It should be noted that the effective point number corresponding to the depth image of the Kth scene may be determined using formula (4), where N_depth' represents the theoretical depth feature point number of the Kth scene, Depth-size represents the resolution of the depth image of the Kth scene, N_depth represents the feature point number of the depth image of the Kth scene, and M_K represents the effective point number corresponding to the depth image of the Kth scene.
M_K=(N_depth'/N_depth)×Depth-size (4)
Thus, each scene corresponds to one depth-image effective point number; when the number of scenes is N, the effective point numbers corresponding to the depth images of the different scenes are M_1 to M_N.
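A sketch of formulas (3) and (4) under the same caveats: the function and parameter names are illustrative, δ is the preset redundancy (for example within 0.3 to 0.8), and treating the resolution as the total pixel count is one reading, not a fact stated by the disclosure.

```python
# Formula (3): theoretical depth feature number from the color statistic and
# the preset redundancy delta. Formula (4): effective point number M_K, scaled
# by the depth image resolution (interpreted here as total pixel count).
def valid_point_number(n_rgb, n_depth, depth_resolution, delta=0.5):
    n_depth_theoretical = n_rgb / (delta + 1.0)                # formula (3)
    return (n_depth_theoretical / n_depth) * depth_resolution  # formula (4)
```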
In some embodiments, the quantifying imaging of the depth image based on the number of valid points corresponding to the depth image under the N scenes includes:
According to the effective points corresponding to the depth images under the N scenes and the weights occupied by the N scenes, counting the sum of the effective points of the depth images under the N scenes;
Imaging of the depth image is quantified based on the valid point sums.
In the embodiment of the disclosure, different scenes occupy different weights. For example, the weight of a portrait scene may be set greater than that of a motion scene, or the weight of a scenery scene may be set greater than that of a night scene.
In some embodiments, the weights occupied by the N scenes are determined by the frequency of use of the N scenes.
In the embodiment of the disclosure, the weights occupied by the different scenes among the N scenes may be set according to the frequency with which each scene is used. For example, if a portrait scene is used more frequently than a motion scene, the weight of the portrait scene may be set greater than that of the motion scene.
The weights of different scenes may also be set according to the ratio of their frequencies of use. For example, if the shooting functions defined by the terminal device correspond to a portrait scene and a motion scene, and the ratio of the frequency of use of the portrait scene to that of the motion scene is 4:1, the weight of the portrait scene may be set to 0.8 and the weight of the motion scene to 0.2.
It should be noted that the frequencies of use of the different scenes can be obtained from statistics on the images the user shoots in each scene with the camera of the terminal device. Each scene corresponds to one weight, and the weights of the N scenes sum to 1.
Counting the sum of the effective point numbers of the depth images under the N scenes according to the effective point numbers corresponding to the depth images under the N scenes and the weights occupied by the N scenes comprises: multiplying the effective point number corresponding to the depth image under each scene by the weight occupied by that scene, summing these products over all scenes, and taking the resulting value as the sum of the effective point numbers of the depth images under the N scenes.
Illustratively, formula (5) may be employed to determine the effective point sum, where N denotes the number of scenes, K indexes any one of the N scenes, M_K denotes the effective point number corresponding to the depth image of the Kth scene, w_K denotes the weight occupied by the Kth scene, and Q denotes the sum of the effective point numbers.
Q=∑(K=1 to N) M_K×w_K (5)
In the embodiment of the disclosure, the imaging of the depth image is quantized based on the effective point sum: the larger the sum, the better the imaging of the corresponding depth image. The imaging performance of the depth image can thus be quantified well through this quantization method.
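The weighted accumulation of formula (5), together with frequency-derived weights, might be sketched as follows; the example numbers are hypothetical.

```python
# Formula (5): Q = sum over K of M_K * w_K, with weights normalized from
# per-scene usage frequencies so that they sum to 1.
def quantize_depth_imaging(valid_points, usage_freqs):
    total = sum(usage_freqs)
    weights = [f / total for f in usage_freqs]
    return sum(m * w for m, w in zip(valid_points, weights))

# e.g. a portrait scene used 4x as often as a motion scene -> weights 0.8 / 0.2
q = quantize_depth_imaging(valid_points=[1500.0, 900.0], usage_freqs=[4, 1])
```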
For a better understanding of the above disclosed embodiments, examples are as follows:
As shown in fig. 2, when the scenes corresponding to the shooting functions of different terminal devices are defined, one scene may be set for each shooting function; thus, when the terminal device shoots with the depth camera and the color camera, target images under N scenes can be acquired, realizing acquisition of the target images.
As shown in fig. 3, when different weights are defined for the different scenes of different terminal devices, one scene may be set for each shooting function, and each scene has one weight, so that the weights occupied by the different scenes among the N scenes can be obtained.
As shown in fig. 4, the target image includes a color image and a depth image, and the feature point number of the color image of the Kth scene may be obtained as follows: S401, acquiring the color image of the Kth scene; S402, extracting feature points from the color image of the Kth scene, and counting the first feature point number in one grid and the second feature point number in all grids of the rasterized color image of the Kth scene; S403, determining the feature point number of the color image of the Kth scene based on the ratio of the second feature point number to the first feature point number.
As shown in fig. 5, the feature point number of the depth image of the Kth scene may be obtained as follows: S501, acquiring the depth image of the Kth scene; S502, extracting feature points from the depth image of the Kth scene, and counting the third feature point number in one grid and the fourth feature point number in all grids of the rasterized depth image of the Kth scene; S503, determining the feature point number of the depth image of the Kth scene based on the ratio of the fourth feature point number to the third feature point number.
In the embodiment of the disclosure, after the target images under the N scenes are acquired, the feature points of the color images of the N scenes and the feature points of the depth images of the N scenes may be acquired based on the depth images of the N scenes and the color images of the N scenes; then, based on the feature points of the color images of the N scenes and the feature points of the depth images of the N scenes, obtaining the effective points of the depth images of the N scenes; finally, based on the effective points corresponding to the depth images of the N scenes and the weights occupied by the N scenes, the sum of the effective points of the depth images of the N scenes can be obtained, and then the imaging of the quantized depth images can be realized.
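Pulling the steps together, an end-to-end sketch over N scenes could look like this; every helper name comes from the sketches above and is an assumption, not an API stated by the disclosure.

```python
# For each scene: extract color/depth feature statistics, compute M_K via
# formulas (3)-(4), then combine all scenes via formula (5).
def quantize_over_scenes(captures, delta=0.5):
    ms, freqs = [], []
    for color_gray, depth_gray, scene_type, freq in captures:
        n_rgb = grid_feature_number(
            extract_feature_points(color_gray, scene_type), color_gray.shape)
        n_dep = grid_feature_number(
            extract_feature_points(depth_gray, scene_type), depth_gray.shape)
        h, w = depth_gray.shape[:2]
        ms.append(valid_point_number(n_rgb, n_dep, h * w, delta))
        freqs.append(freq)
    return quantize_depth_imaging(ms, freqs)
```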
Fig. 6 is a diagram of a quantization apparatus for a depth image according to an exemplary embodiment. Referring to fig. 6, the quantization apparatus of a depth image includes an image acquisition module 1001, a point determination module 1002 and an image quantization module 1003, wherein,
The image acquisition module 1001 is configured to acquire target images under N scenes, where N is a positive integer greater than or equal to 2;
The point determining module 1002 is configured to determine valid points corresponding to depth images included in the target images under the N scenes;
the image quantization module 1003 is configured to quantize imaging of the depth image based on the number of effective points corresponding to the depth image under the N scenes.
In some embodiments, the points determination module 1002 includes:
A color feature module configured to determine feature points of the color image of a kth scene of the N scenes; k is a positive integer greater than or equal to 1 and less than or equal to N;
A depth feature module configured to determine feature points of the depth image of the kth scene of the N scenes;
The depth point module is configured to determine the effective points corresponding to the depth image of the Kth scene based on the feature points of the color image of the Kth scene and the feature points of the depth image of the Kth scene, until the effective points corresponding to the depth images of the N scenes are determined; the Kth scene is any one of the N scenes, so the effective point number is determined for the depth image of every scene.
In some embodiments, the depth point module is further configured to determine the theoretical depth feature point number of the Kth scene based on the feature point number of the color image of the Kth scene and a preset feature point redundancy between the color image and the depth image; and to determine the effective point number corresponding to the depth image of the Kth scene based on the theoretical depth feature point number of the Kth scene, the feature point number of the depth image of the Kth scene and the resolution of the depth image of the Kth scene.
In some embodiments, the color feature module is further configured to perform feature point extraction on the color image of the Kth scene, and to count the first feature point number in one grid and the second feature point number in all grids of the rasterized color image of the Kth scene; and to determine the feature point number of the color image of the Kth scene based on the ratio of the second feature point number to the first feature point number.
In some embodiments, the depth feature module is further configured to perform feature point extraction on the depth image of the Kth scene, and to count the third feature point number in one grid and the fourth feature point number in all grids of the rasterized depth image of the Kth scene; and to determine the feature point number of the depth image of the Kth scene based on the ratio of the fourth feature point number to the third feature point number.
In some embodiments, the image quantization module is further configured to count a sum of the effective points of the depth images under the N scenes according to the effective points corresponding to the depth images under the N scenes and weights occupied by the N scenes; imaging of the depth image is quantified based on the valid point sums.
In some embodiments, the apparatus is further configured to determine weights occupied by the N scenes by using frequencies of the N scenes.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 7 is a quantization apparatus diagram of a depth image according to an exemplary embodiment. For example, the apparatus may be a terminal device including a mobile phone, a mobile computer, a mobile notebook, or the like.
Referring to fig. 7, the apparatus may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device. Examples of such data include instructions for any application or method operating on the device, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the device. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for devices.
The multimedia component 808 includes a screen that provides an output interface between the device and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense the boundary of a touch or slide action and also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the device. For example, the sensor assembly 814 may detect an on/off state of the device and the relative positioning of components, such as the display and keypad of the device; the sensor assembly 814 may also detect a change in position of the device or of one of its components, the presence or absence of user contact with the device, a change in device orientation or acceleration/deceleration, and a change in the temperature of the device. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus and other devices in a wired or wireless manner. The device may access a wireless network based on a communication standard, such as Wi-Fi, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including instructions executable by processor 820 of the apparatus to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
A non-transitory computer readable storage medium, which when executed by a processor of a terminal device, causes the terminal device to perform a method of quantization of depth images, the method comprising:
Acquiring target images under N scenes, wherein N is a positive integer greater than or equal to 2;
Determining the effective point number corresponding to the depth image contained in the target image under the N scenes;
And quantifying imaging of the depth images based on the effective points corresponding to the depth images under the N scenes.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of quantization of a depth image, the method comprising:
Acquiring target images under N scenes, wherein N is a positive integer greater than or equal to 2;
determining the effective points corresponding to the depth images under the N scenes based on the feature points of the depth images contained in the target images under the N scenes;
Quantifying imaging of the depth images based on the number of effective points corresponding to the depth images under the N scenes;
the target image comprises a color image; the determining the effective point corresponding to the depth image in the N scenes based on the feature point of the depth image included in the target image in the N scenes includes:
determining feature points of the color image of a Kth scene in the N scenes; k is a positive integer greater than or equal to 1 and less than or equal to N;
Determining feature points of the depth image of the Kth scene in the N scenes;
Determining the effective points corresponding to the depth image of the Kth scene based on the feature points of the color image of the Kth scene and the feature points of the depth image of the Kth scene until the effective points corresponding to the depth images of the N scenes are determined, wherein the Kth scene is any one of the N scenes;
The determining the effective point corresponding to the depth image of the kth scene based on the feature point of the color image of the kth scene and the feature point of the depth image of the kth scene includes:
determining theoretical depth feature points of the Kth scene based on feature points of the color image of the Kth scene and redundancy of feature points between a preset color image and a depth image;
And determining the effective point corresponding to the depth image of the Kth scene based on the theoretical depth feature point of the Kth scene, the feature point of the depth image of the Kth scene and the resolution of the depth image of the Kth scene.
2. The method of claim 1, wherein said determining feature points of the color image of a kth scene of the N scenes comprises:
extracting characteristic points of the color image of the Kth scene, and counting first characteristic points in one grid and second characteristic points in all grids in the color image of the Kth scene;
And determining the feature point of the color image of the Kth scene based on the ratio of the second feature point to the first feature point.
3. The method of claim 1, wherein the determining feature points of the depth image of the kth scene of the N scenes comprises:
Extracting characteristic points of the depth image in the Kth scene, and counting third characteristic points in one grid and fourth characteristic points in all grids in the depth image of the Kth scene;
And determining the feature point of the depth image of the Kth scene based on the ratio of the fourth feature point to the third feature point.
4. A method according to any one of claims 1 to 3, wherein said quantifying imaging of said depth image based on the number of significant points corresponding to said depth image for said N scenes comprises:
According to the effective points corresponding to the depth images under the N scenes and the weights occupied by the N scenes, counting the sum of the effective points of the depth images under the N scenes;
Imaging of the depth image is quantified based on the valid point sums.
5. The method according to claim 4, wherein the method further comprises:
and determining the weights occupied by the N scenes according to the using frequencies of the N scenes.
6. A quantization apparatus of a depth image, the apparatus comprising:
The image acquisition module is configured to acquire target images under N scenes, wherein N is a positive integer greater than or equal to 2;
the point determining module is configured to determine effective points corresponding to the depth images under the N scenes based on the feature points of the depth images contained in the target images under the N scenes;
The image quantization module is configured to quantize imaging of the depth image based on the number of effective points corresponding to the depth image under the N scenes;
The point determining module includes:
The color feature module is configured to determine feature points of a color image of a Kth scene in the N scenes; k is a positive integer greater than or equal to 1 and less than or equal to N;
A depth feature module configured to determine feature points of the depth image of the kth scene of the N scenes;
The depth point module is configured to determine the effective points corresponding to the depth image of the Kth scene based on the feature points of the color image of the Kth scene and the feature points of the depth image of the Kth scene until the effective points corresponding to the depth images of the N scenes are determined, wherein the Kth scene is any one of the N scenes;
the depth point number module is further configured to determine a theoretical depth feature point number of the kth scene based on the feature point number of the color image of the kth scene and a feature point redundancy between a preset color image and a depth image; and determining the effective point corresponding to the depth image of the Kth scene based on the theoretical depth feature point of the Kth scene, the feature point of the depth image of the Kth scene and the resolution of the depth image of the Kth scene.
7. The apparatus of claim 6, wherein the color feature module is further configured to perform feature point extraction on the color image of the Kth scene, to count a first feature point number within one grid and a second feature point number within all grids of the rasterized color image of the Kth scene, and to determine the feature point number of the color image of the Kth scene based on the ratio of the second feature point number to the first feature point number.
8. The apparatus of claim 6, wherein the depth feature module is further configured to perform feature point extraction on the depth image of the Kth scene, to count a third feature point number within one grid and a fourth feature point number within all grids of the rasterized depth image of the Kth scene, and to determine the feature point number of the depth image of the Kth scene based on the ratio of the fourth feature point number to the third feature point number.
9. The apparatus according to any one of claims 6 to 8, wherein the image quantization module is further configured to calculate the sum of the effective point numbers of the depth images under the N scenes according to the effective point numbers corresponding to the depth images under the N scenes and the weights occupied by the N scenes, and to quantify the imaging of the depth images based on the sum of the effective point numbers.
10. The apparatus of claim 9, wherein the apparatus is further configured to determine the weights occupied by the N scenes according to the usage frequencies of the N scenes.
11. A terminal device, characterized in that it comprises at least: a processor and a memory for storing executable instructions capable of running on the processor, wherein:
the processor is configured to execute the executable instructions and, when the executable instructions are executed, to perform the steps of the quantization method for a depth image provided in any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the quantization method for a depth image provided in any one of claims 1 to 5.
CN202110546854.XA 2021-05-19 2021-05-19 Quantization method and device for depth image, terminal equipment and storage medium Active CN115379194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110546854.XA CN115379194B (en) 2021-05-19 2021-05-19 Quantization method and device for depth image, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115379194A CN115379194A (en) 2022-11-22
CN115379194B (en) 2024-06-04

Family

ID=84059308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110546854.XA Active CN115379194B (en) 2021-05-19 2021-05-19 Quantization method and device for depth image, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115379194B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010025655A1 (en) * 2008-09-02 2010-03-11 华为终端有限公司 3d video communicating means, transmitting apparatus, system and image reconstructing means, system
US8983177B2 (en) * 2013-02-01 2015-03-17 Mitsubishi Electric Research Laboratories, Inc. Method for increasing resolutions of depth images

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914874A (en) * 2014-04-08 2014-07-09 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
US10306203B1 (en) * 2016-06-23 2019-05-28 Amazon Technologies, Inc. Adaptive depth sensing of scenes by targeted light projections
CN108605097A (en) * 2016-11-03 2018-09-28 华为技术有限公司 Optical imaging method and its device
CN108053372A (en) * 2017-12-01 2018-05-18 北京小米移动软件有限公司 The method and apparatus for handling depth image
WO2020192692A1 (en) * 2019-03-25 2020-10-01 华为技术有限公司 Image processing method and related apparatus
CN111340845A (en) * 2020-02-25 2020-06-26 上海黑眸智能科技有限责任公司 Automatic tracking method, system, terminal and medium based on depth vision sensor
CN111486855A (en) * 2020-04-28 2020-08-04 武汉科技大学 Indoor two-dimensional semantic grid map construction method with object navigation points
CN112102387A (en) * 2020-08-14 2020-12-18 上海西虹桥导航技术有限公司 Depth estimation performance testing method and system based on depth camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Runmin Cong et al., "Going From RGB to RGBD Saliency: A Depth-Guided Transformation Model," IEEE Transactions on Cybernetics, Vol. 50, Issue 8, August 2020, full text. *
Xin Guanxi, "Research on Simultaneous Localization and Mapping Based on an RGB-D Camera," China Master's Theses Full-text Database (Information Science and Technology), full text. *

Similar Documents

Publication Publication Date Title
CN109547701B (en) Image shooting method and device, storage medium and electronic equipment
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
CN106778773B (en) Method and device for positioning target object in picture
RU2628494C1 (en) Method and device for generating image filter
CN107967459B (en) Convolution processing method, convolution processing device and storage medium
CN106131441B (en) Photographing method and device and electronic equipment
CN110569822A (en) image processing method and device, electronic equipment and storage medium
CN107025441B (en) Skin color detection method and device
CN104050645B (en) Image processing method and device
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
US11222235B2 (en) Method and apparatus for training image processing model, and storage medium
CN110796012B (en) Image processing method and device, electronic equipment and readable storage medium
CN117616774A (en) Image processing method, device and storage medium
CN109145878B (en) Image extraction method and device
CN113096022A (en) Image blurring processing method and device, storage medium and electronic equipment
CN108154090B (en) Face recognition method and device
CN112288657A (en) Image processing method, image processing apparatus, and storage medium
CN115379194B (en) Quantization method and device for depth image, terminal equipment and storage medium
CN114338956B (en) Image processing method, image processing apparatus, and storage medium
CN114120034A (en) Image classification method and device, electronic equipment and storage medium
CN113469036A (en) Living body detection method and apparatus, electronic device, and storage medium
CN116452837A (en) Feature matching method, device, terminal and storage medium
CN118118782A (en) Image processing method, image processing apparatus, and storage medium
CN116452657A (en) Pose transformation relation determining method, device, terminal and storage medium
CN114943886A (en) Image recognition method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant