CN113301320A - Image information processing method and device and electronic equipment


Info

Publication number
CN113301320A
CN113301320A
Authority
CN
China
Prior art keywords
depth
image
depth map
target
map
Prior art date
Legal status
Granted
Application number
CN202110376557.5A
Other languages
Chinese (zh)
Other versions
CN113301320B (en)
Inventor
李佐广
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110376557.5A priority Critical patent/CN113301320B/en
Publication of CN113301320A publication Critical patent/CN113301320A/en
Application granted granted Critical
Publication of CN113301320B publication Critical patent/CN113301320B/en
Legal status: Active (granted)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image information processing method and device and an electronic device, and belongs to the field of communication technology. The method comprises the following steps: the electronic device acquires a first image and a second image; the electronic device acquires a first depth map corresponding to the first image according to the first image and the second image; the electronic device acquires a second depth map; the electronic device updates the depth information corresponding to each pixel point in a target depth area in the first depth map to the depth information corresponding to a target depth point in the second depth map, so as to obtain a third depth map; the electronic device determines a blurring range in a target image according to the third depth map; and the electronic device performs blurring processing on the image within the blurring range in the target image to obtain a blurred image.

Description

Image information processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image information processing method and device and electronic equipment.
Background
With the development of electronic technology, the shooting modes of electronic devices (e.g., mobile phones) are becoming more and more abundant. Among them, blurring shooting (a mode in which the shot image highlights the subject while the background is blurred) has become a common shooting mode.
In the related art, the electronic device may perform blurring shooting through a dual-camera technique: a scene is shot simultaneously with two lenses of the electronic device; after shooting, based on the shooting angles of the two lenses and the position differences of the same features in the two pictures, the electronic device obtains the depth information corresponding to different features in the pictures, determines a blurring range according to the depth information, and blurs the shot pictures accordingly.
However, when the captured scene suffers from problems such as dim light or occlusion, different features in the captured picture are difficult to distinguish, so the depth information of those features may not be obtained accurately. As a result, the dual-camera technique may fail to determine the blurring range accurately, and the blurring effect of the picture is poor.
Disclosure of Invention
The embodiments of the application aim to provide an image information processing method, an image information processing device and an electronic device, which can solve the following problem: when the captured scene suffers from dim light, occlusion and the like, different features in the captured picture are difficult to distinguish, so the depth information of the features may not be acquired accurately; the dual-camera technique then may not determine the blurring range accurately, and the blurring effect of the picture is poor.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image information processing method, including: the electronic device acquires a first image and a second image, where the first image and the second image are images of the same shooting object captured by different cameras of the electronic device; the electronic device acquires a first depth map corresponding to the first image according to the first image and the second image; the electronic device acquires a second depth map, where the second depth map includes depth information collected by a multi-point laser of the electronic device, the first depth map and the second depth map are depth information of the same object, and the second depth map includes N depth points; the electronic device updates the depth information corresponding to each pixel point in a target depth area in the first depth map to the depth information corresponding to a target depth point in the second depth map to obtain a third depth map, where the target depth area is matched with the target depth point; the electronic device determines a blurring range in a target image according to the third depth map, where the target image is any one of the following: the first image, the second image, and a third image obtained from the first image and the second image; and the electronic device performs blurring processing on the image within the blurring range in the target image to obtain a blurred image.
In a second aspect, an embodiment of the present application provides an image information processing apparatus, where the apparatus includes an obtaining module, an updating module, a determining module, and a processing module: the obtaining module acquires a first image and a second image, where the first image and the second image are images of the same shooting object captured by different cameras of the electronic device; the obtaining module is further configured to acquire a first depth map corresponding to the first image according to the first image and the second image; the obtaining module is further configured to acquire a second depth map, where the second depth map includes depth information collected by a multi-point laser of the electronic device, the first depth map and the second depth map are depth information of the same object, and the second depth map includes N depth points; the updating module is configured to update the depth information corresponding to each pixel point in a target depth area in the first depth map acquired by the obtaining module to the depth information corresponding to a target depth point in the second depth map to obtain a third depth map, where the target depth area is matched with the target depth point; the determining module is configured to determine a blurring range in a target image according to the third depth map, where the target image is any one of the following: the first image, the second image, and a third image obtained from the first image and the second image; the processing module is configured to perform blurring processing on the image within the blurring range in the target image to obtain a blurred image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In this embodiment of the application, after the electronic device acquires a first image and a second image of the same photographic object acquired by different cameras of the electronic device, the electronic device acquires a first depth map corresponding to the first image according to the first image and the second image, and further acquires a second depth map (the second depth map includes depth information acquired by a multi-point laser, and the first depth map and the second depth map are depth information of the same object), and then the electronic device may update depth information corresponding to each pixel point in a target depth area in the first depth map to depth information corresponding to a target depth point in the second depth map to obtain a third depth map, and finally determine a blurring range in the first image, the second image, or the third image according to the third depth map. Therefore, the depth information in the shot image can be corrected and improved through the low-cost multipoint laser, and then the depth information of each characteristic in the shot image of the electronic equipment can be accurately obtained, so that the depth information of the image can be accurately obtained under the condition of saving cost, the accurate blurring range is determined, and the better and more accurate blurring image is finally obtained.
Drawings
Fig. 1 is a schematic flowchart of an image information processing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an image information processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an image information processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The objects distinguished by "first", "second" and the like are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image information processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image information processing method provided by the embodiment of the application can be applied to blurring shooting scenes.
For a blurring shooting scene, after shooting the same scene through two cameras at different angles, the electronic device acquires the depth information of each feature in one image (namely image A) according to the parallax values between the two images (namely image A and image B) captured by the two cameras, determines a blurring range according to the depth information of image A, and then blurs image A to obtain an image with a blurring effect. In this process, the electronic device determines the distance relationships between different objects in the image mainly by recognizing edge line parameters of the image and the like. However, when the scene shot by the electronic device contains sparse texture (e.g., a large white wall or objects of a single color), repetitive texture (e.g., the checks on a checked shirt), dim light, or occlusion, errors easily occur during image recognition, resulting in inaccurate depth information and hence an erroneous blurring range, and the final image may exhibit missing or false blurring.
In the embodiment of the application, after acquiring an image a and an image B, an electronic device acquires a depth information map 1 of the image a through a double-shot parallax technology, then performs information acquisition on the same scene through a multi-point laser to obtain a depth information map 2, then updates depth information corresponding to each pixel point in a target depth area in the depth information map 1 into depth information corresponding to a target depth point in the depth information map 2 to obtain a depth information map 3, then determines a blurring range of the image a according to the depth information map 3, and finally performs blurring processing on the image in the blurring range to obtain a blurred image. Therefore, the depth information in the shot image can be corrected and improved through the low-cost multipoint laser, and then the depth information of each characteristic in the shot image of the electronic equipment can be accurately obtained, so that the depth information of the image can be accurately obtained under the condition of saving cost, the accurate blurring range is determined, and the better and more accurate blurring image is finally obtained.
The present embodiment provides an image information processing method, as shown in fig. 1, including the following steps 301 to 306:
step 301: the image information processing apparatus acquires a first image and a second image.
In an embodiment of the present application, the first image and the second image are images of the same shooting object captured by different cameras of an electronic device.
In this embodiment of the application, the electronic device includes at least two cameras, and the first image is an image captured by any one of the cameras on the electronic device. Generally, the first image is a main image captured by a main camera in the electronic device.
For example, the electronic device may acquire the first image and the second image by receiving a photographing input.
The first image and the second image may be images captured at the same time or images captured at different times, which is not limited in the embodiment of the present application.
For example, the object to be photographed may be any object, and the number of objects or persons in the object to be photographed may be one or more, which is not limited in the embodiment of the present application.
Step 302: the image information processing device acquires a first depth map corresponding to the first image from the first image and the second image.
For example, the electronic device may process the first image and the second image according to a stereoscopic imaging principle, so as to obtain depth information of the first image, and further obtain a first depth map of the first image.
In an example, since the shooting angles of the first image and the second image are different, the electronic device needs to first find the corresponding position relationship of the matching object points (i.e. the same object or person) on the first image and the second image, and then calculate the depth information of the first image according to the parallax values of the same object points on the first image and the second image, that is, obtain the first depth map corresponding to the first image.
In this embodiment of the application, the depth information of the first image, that is, the first depth map, may be the depth information corresponding to all pixel windows in the first image. For example, the electronic device may divide the first image into 9 x 9 pixel windows, each having corresponding depth information.
It should be noted that:
Firstly: in order to reduce the amount of calculation when matching object points between the first image and the second image, the electronic device needs to adjust the imaging planes of the first image and the second image to the same plane. That is, as shown in fig. 2 (a), the X-Y-Z coordinate axes of the first image 31 and of the second image 32 are not aligned with each other; as shown in fig. 2 (b), the electronic device aligns the X-Y-Z coordinate axes of the first image 31 with those of the second image 32, thereby reducing the amount of calculation for matching object points between the first image 31 and the second image 32.
Secondly: after finding the corresponding position relationship between the first image and the second image, the electronic device may calculate the parallax value of each matched object point between the first image and the second image, and further calculate the depth value of the pixel window centered on the pixel point where each object point is located. For example, when pixel window 1 centered on pixel point A in the first image is matched against pixel window 2 centered on pixel point B in the second image, the sum of the absolute differences between the pixels in pixel window 1 and the corresponding pixels in pixel window 2 may be used as the matching cost to obtain the parallax value between pixel window 1 and pixel window 2.
Thirdly: the electronic device discards a disparity value when the confidence of the disparity value calculated between matching pixel windows in the first and second images is below a preset threshold.
Therefore, by acquiring the depth values between the first image and the second image, the electronic device can acquire the depth information of the first image, so that estimated depth information is obtained in advance from the shot images, as in the sketch below; this facilitates the subsequent comparison with the depth values acquired by the multi-point laser, so that the most accurate depth information in the first image is finally obtained.
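The following is a minimal sketch, in Python, of the window-based matching and triangulation described in step 302. It assumes rectified grayscale inputs (imaging planes already aligned, as in the first note above); the function names, the parameters `focal_px` and `baseline_m`, and the confidence measure are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def disparity_by_sad(left, right, window=9, max_disp=64, conf_thresh=0.6):
    """Window-based matching: for each pixel window in the left image, find
    the horizontal shift in the right image with the smallest sum of absolute
    differences (SAD); low-confidence matches are discarded (set to 0)."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1].astype(np.int32)).sum()
                     for d in range(max_disp)]
            best = int(np.argmin(costs))
            # crude confidence: how distinctly the best cost beats the mean cost
            confidence = 1.0 - costs[best] / (np.mean(costs) + 1e-6)
            disp[y, x] = best if confidence >= conf_thresh else 0
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulation: depth = focal_px * baseline_m / disparity;
    zero-disparity (discarded) pixels stay unknown (0)."""
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```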
Step 303: the image information processing apparatus acquires a second depth map.
In an embodiment of the application, the second depth map includes depth information collected by a multi-spot laser of the electronic device, the first depth map and the second depth map include depth information of the same object, and the second depth map includes N depth spots.
Step 304: and the image information processing device updates the depth information corresponding to each pixel point in the target depth area in the first depth map into the depth information corresponding to the target depth point in the second depth map so as to obtain a third depth map.
In the embodiment of the present application, the target depth area matches the target depth point.
In this embodiment of the application, the electronic device may compare the respective depth information in the first depth map and the second depth map, and adjust the depth information in the first depth map according to a difference between the depth information in the second depth map and the depth information in the first depth map.
It can be understood that comparing the depth information of the first depth map with that of the second depth map corrects and improves depth information of the first depth map that is not accurate enough due to occlusion, dim light, sparse texture or repetitive texture.
In this embodiment, the first depth map and the second depth map may each include a plurality of depth regions, and a depth region may be an image window centered on a laser point of the multi-point laser. For example, if the multi-point laser includes 81 laser points distributed in a 9 x 9 grid, the second depth map may include the depth information of 9 x 9 regions, that is, 81 regions: the second depth map is divided into 81 image windows centered on the 81 laser points. Correspondingly, the electronic device may divide the first depth map into 81 pixel windows corresponding to the second depth map according to the laser point distribution of the second depth map, so that the electronic device can subsequently fuse the depth information of the first depth map with that of the second depth map (for example, update the depth information of the first depth map with the depth information of the second depth map), as sketched below.
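As a rough illustration of the region split and update just described, assuming a 9 x 9 laser grid that evenly tiles the image (a simplification of spot-centered windows):

```python
import numpy as np

def split_into_regions(depth_map, grid=9):
    """Return a grid x grid table of views over depth_map, one per laser-spot
    window; each view aliases the original array so it can be updated in place."""
    h, w = depth_map.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    return [[depth_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
             for j in range(grid)] for i in range(grid)]

def fuse_region(region, laser_depth):
    """Step 304 in its simplest form: overwrite every pixel of a matched
    target depth area with the depth of the corresponding laser point."""
    region[...] = laser_depth
```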
Further, the depth information corresponding to each pixel point may include a depth value of the depth area corresponding to the pixel point and a related parameter corresponding to the depth value.
In this embodiment, the target depth area may be all depth areas or a part of depth areas in the plurality of depth areas.
For example, the electronic device may correct part or all of the depth information of the first depth map according to the difference between the depth information of the first depth map and that of the second depth map; in particular, it may replace part or all of the depth information of the first depth map with the depth information of the second depth map.
In the embodiment of the present application, after the electronic device processes the depth information of the first depth map according to the depth information of the first depth map and the depth information of the second depth map, a third depth map is obtained, and the third depth map is a sparse depth map. It can be understood that, since the electronic device acquires the depth information of the second depth map with a multi-point laser, that depth information is relatively sparse compared with the depth information acquired by the dense lasers of the related art and may be called a sparse depth map; accordingly, after the electronic device processes the depth information of the first depth map with the depth information of the second depth map, the depth information of the third depth map is correspondingly sparse.
In this way, the electronic device can correct an inaccurate part of the depth information of the first depth map by using the depth information of the second depth map, so that more accurate image information can be acquired in the subsequent blurring process of the image.
Step 305: the image information processing device determines a blurring range in the target image based on the third depth map.
In an embodiment of the present application, the target image is any one of: the first image, the second image, and a third image obtained from the first image and the second image.
In this embodiment of the application, after obtaining the third depth map, the electronic device may obtain the depth value of the focus position in the third depth map, then obtain the depth range of the sharp portion in the third depth map according to that depth value, and finally determine the blurring range by determining the blurring radius in combination with the depth range of the sharp portion.
For example, once the electronic device has obtained the depth value of the focus position, it may query a focus depth value table preset in the electronic device to obtain the depth range of the sharp portion in the third depth map as a numerical range; for example, if the minimum depth of the sharp portion is near and the farthest depth is far, the depth range is [near, far].
Illustratively, the blurring radius is the radius of the blur circle applied to the image: the smaller the blurring radius, the lighter the blurring degree; the larger the blurring radius, the heavier the blurring degree.
For example, in the case that the target image is the first image, the blurring radius may be determined by the relationship between the depth d of a pixel point in the first image and the depth range of the sharp portion (e.g., whether d lies within [near, far], and the distance between d and near or between d and far), the aperture size, and the foreground and background blurring distances of the corresponding scene. Let mindist be the minimum depth value in the first depth map and maxdist the maximum depth value in the first depth map; the foreground blurring distance of the first image is foreground_length = near - mindist, the background blurring distance is background_length = maxdist - far, and the maximum blurring radius corresponding to the selected aperture (e.g., F2.0) is denoted max_bokeh_r. Then:

Foreground blurring radius = (near - d) * max_bokeh_r / foreground_length (formula 1)

Background blurring radius = (d - far) * max_bokeh_r / background_length (formula 2)
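Formulas 1 and 2 can be applied per pixel as in the following sketch; the variable names follow the reconstruction above, and the clamping to max_bokeh_r is an added safeguard rather than part of the original text.

```python
import numpy as np

def blur_radius_map(depth, near, far, mindist, maxdist, max_bokeh_r):
    """Per-pixel blurring radius: zero inside the sharp range [near, far],
    growing linearly toward max_bokeh_r at the nearest/farthest depths."""
    foreground_length = max(near - mindist, 1e-6)   # denominator of formula 1
    background_length = max(maxdist - far, 1e-6)    # denominator of formula 2
    radius = np.zeros_like(depth, dtype=np.float32)
    fg = depth < near                               # in front of the sharp range
    bg = depth > far                                # behind the sharp range
    radius[fg] = (near - depth[fg]) * max_bokeh_r / foreground_length
    radius[bg] = (depth[bg] - far) * max_bokeh_r / background_length
    return np.clip(radius, 0.0, max_bokeh_r)
```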
Step 306: the image information processing device performs blurring processing on the image within the blurring range in the target image to obtain a blurred image.
For example, after the blurring range is determined, blurring processing is performed on the image pixels within the blurring range; the blurring degree is generally lighter where the blurring radius is smaller and heavier where it is larger.
In one example, the smaller the blurring radius, the lighter the blurring degree, and the larger the blurring radius, the heavier the blurring degree. In general, electronic devices often use a circular filter with the computed blurring radius to perform the blurring, as sketched below.
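A minimal sketch of circular (disc) filter blurring on a grayscale image, quantizing the radius map to a few kernel sizes for speed; the level set and helper names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def disc_kernel(r):
    """Normalized circular (disc) averaging kernel of radius r."""
    size = 2 * int(r) + 1
    yy, xx = np.mgrid[:size, :size] - int(r)
    k = (xx ** 2 + yy ** 2 <= r ** 2).astype(np.float32)
    return k / k.sum()

def bokeh_blur(gray, radius_map, levels=(2, 4, 8)):
    """Blur each pixel with the disc level nearest its radius; pixels with
    radius 0 (the sharp range) keep their original values."""
    out = gray.astype(np.float32).copy()
    level_idx = np.argmin(np.abs(radius_map[..., None] - np.array(levels)), axis=-1)
    for i, r in enumerate(levels):
        blurred = convolve(gray.astype(np.float32), disc_kernel(r), mode='nearest')
        sel = (radius_map > 0) & (level_idx == i)
        out[sel] = blurred[sel]
    return out
```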
In the image information processing method provided by the embodiment of the application, after the image information processing device acquires the first image and the second image of the same shooting object captured by different cameras of the electronic equipment, the image information processing device acquires a first depth map corresponding to the first image according to the first image and the second image, and also acquires a second depth map (the second depth map includes depth information acquired by the multi-point laser, and the first depth map and the second depth map are depth information of the same object); then, the image information processing apparatus may update the depth information corresponding to each pixel point in the target depth region in the first depth map to the depth information corresponding to the target depth point in the second depth map to obtain a third depth map, and finally determine the blurring range in the first image, the second image or the third image according to the third depth map. Therefore, the depth information in the shot image can be corrected and improved through the low-cost multi-point laser, and the depth information of each feature in the image shot by the electronic equipment can be accurately obtained, so that the depth information of the image can be accurately acquired at low cost, an accurate blurring range can be determined, and a better and more accurate blurred image is finally obtained.
Optionally, in this embodiment of the application, when, in step 304 above, the depth information corresponding to each pixel point in the target depth area in the first depth map is updated to the depth information corresponding to the target depth point in the second depth map, the image information processing method provided in this embodiment of the application may further include the following steps A1 and A2:
step A1: and when the number of the depth information in the target depth area exceeds a preset range, the image information processing device updates the depth information corresponding to each pixel point in the target depth area.
Step A2: and when the number of the depth information in the target depth area is within the preset range, the image information processing device keeps the depth information corresponding to each pixel point in the target depth area.
For example, when the electronic device counts the depth information of the target depth region in the first depth map, if the amount of depth information corresponding to the target depth region falls outside a preset range, for example, less than 5 or more than 15 depth values, it may be determined that the target depth region is likely to be a region of sparse texture or dim light (i.e., the target depth region has too little depth information) or of repetitive texture (i.e., the target depth region has too many depth values). In this case, the target depth region may be marked as missing depth information, that is, the original depth information in the target depth region is not adopted, and the depth information of the corresponding region in the second depth map is then supplied in its place.
For example, the preset range may be a range value preset by the electronic device, or may also be a range value customized by a user, which is not limited in the embodiment of the present application.
Therefore, by checking whether the amount of depth information in each area is within the preset range, the image information processing device can judge whether the amount of depth information in the area is abnormal, and then update the depth information only in target depth areas whose amount of depth information is not within the preset range while keeping the depth information of other areas unchanged, so that the electronic equipment can correct the depth information while saving the resources consumed by the correction. A sketch of this check follows.
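A sketch of the decision in steps A1/A2, assuming that the "amount of depth information" is a count of valid depth values in the region and that the example bounds 5 and 15 define the preset range:

```python
def needs_update(valid_depth_count, lo=5, hi=15):
    """True when the count falls outside the preset range: too few values
    suggests sparse texture or dim light, too many suggests repetitive
    texture; either way the laser depth should replace the region."""
    return valid_depth_count < lo or valid_depth_count > hi
```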
Optionally, in this embodiment of the application, when, in step 304 above, the depth information corresponding to each pixel point in the target depth area in the first depth map is updated to the depth information corresponding to the target depth point in the second depth map, the image information processing method provided in this embodiment of the application may further include the following steps B1 to B5:
step B1: the image information processing apparatus divides the first depth map into N depth regions.
Step B2: the image information processing apparatus calculates a depth mean value of each of the above-mentioned N depth regions.
Step B3: the image information processing apparatus determines at least two matching regions that match the second depth map among the N depth regions.
Step B4: the image information processing device determines a depth scaling ratio according to the ratio of the depth mean value of each depth area in the matching areas to the depth value of the corresponding depth point in the second depth map.
Step B5: the image information processing apparatus adjusts the depth value of each depth area according to the depth scaling ratio.
For example, the depth region may refer to the foregoing description, and will not be described herein again.
For example, the electronic device may first calculate the mean and variance of the depth values corresponding to each depth region of the first depth map, and obtain the depth value of the corresponding depth point in the second depth map. Then, for each matched region, it calculates the proportional relationship between the two (for example, the depth value of the region in the second depth map divided by the mean depth value of the region in the first depth map). Finally, it multiplies the depth information of all pixel points in that region of the first depth map by this ratio, that is, the depth information of all pixel points in the region is proportionally scaled up or down, as sketched below.
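A sketch of the rescaling in steps B4/B5 for one matched region, assuming a floating-point depth map and that the depth scaling ratio is the laser depth divided by the region's mean depth:

```python
def rescale_region(region, laser_depth):
    """Scale every pixel of a matched region (a float view into the first
    depth map) so the region's mean depth matches the laser point's depth."""
    mean = region.mean()
    if mean > 0:
        region *= laser_depth / mean   # the depth scaling ratio, applied in place
```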
Therefore, by setting a preset range for the target parameters of the depth information of the first depth map, the electronic equipment can accurately, conveniently and quickly judge the accuracy and adoption probability of the depth information of the first depth map, and then quickly and accurately correct the depth information of the first depth map with the depth information of the second depth map, so that the electronic equipment can subsequently determine the blurring range of the image more accurately and blur the image to present a blurred image that meets the user's requirements.
Optionally, in this embodiment of the present application, in step B3 above, the image information processing method provided in this embodiment of the present application may further include the following steps C1 and C2:
step C1: the image information processing apparatus determines, for each depth region, that the depth region does not match the depth point in a case where a variance of depth information in the depth region is greater than a preset threshold.
Step C2: the image information processing apparatus determines that the depth region matches the depth point when a variance of depth information in the depth region is less than or equal to a preset threshold.
For example, the preset threshold value is preset by the electronic device, and may also be set by a user in a self-defined manner. Generally, the preset threshold may be a variance of any feature region in an image pre-stored in the electronic device.
For example, when the electronic device analyzes the depth values of each depth region of the first depth map, if the variance of the depth values of any depth region in the first depth map is greater than the preset threshold, the depth variation of the region may be considered too large, and the depth value of that region in the first depth map is no longer adjusted using the depth value of the corresponding depth point in the second depth map. Conversely, if the variance of the depth values of a depth region in the first depth map is less than or equal to the preset threshold, the depth of the region may be considered relatively smooth, and the depth value of that region in the first depth map can be adjusted using the depth value of the corresponding depth point in the second depth map.
It can be understood that, in the process of adjusting the depth values of the first depth map, the electronic device only adjusts feature regions whose depth is relatively smooth. A sketch of this check follows.
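The variance test of steps C1/C2 might look like the following sketch (the threshold value itself is device-specific):

```python
import numpy as np

def region_matches(region, var_thresh):
    """True when the region's depth is smooth enough (variance at or below
    the preset threshold) for one laser point to represent the whole window."""
    return np.var(region) <= var_thresh
```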
Optionally, in this embodiment of the application, in determining the blurring range in step 305 above, the image information processing method provided in this embodiment of the application may include the following steps D1 and D2:
step D1: and the image information processing device processes the third depth map by adopting a dense algorithm to obtain a fourth depth map.
Step D2: the image information processing device determines the blurring range in the target image based on the depth information of the fourth depth map.
It can be understood that, since the third depth map is a sparse depth map, and depth information in the sparse depth map is not rich enough, in order to perfect depth information, the sparse depth map needs to be converted into a dense depth map by a dense algorithm.
Illustratively, the dense algorithm may be a depth filtering method, for example, a guided edge-preserving filtering algorithm.
For example, the dense algorithm may be preset in the electronic device, and may be set by a user according to a self-defined requirement, which is not limited in the embodiment of the present application.
In one example, the electronic device needs to depth-filter the depth information of the third depth map in combination with the RGB information. For example, when the third depth map corresponds to a portrait, the segmentation boundary of the portrait can be used as a constraint on the dense growth; that is, the segmentation boundary of the portrait region is provided to the electronic device in advance, so that the dense algorithm can be applied to the third depth map more accurately to obtain the fourth depth map. A sketch of the densification step follows.
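A sketch of densifying the sparse third depth map. SciPy's griddata interpolation is used here as a simple stand-in for the guided edge-preserving filtering named in the text; a real implementation would also honor the RGB and portrait-segmentation boundaries described above.

```python
import numpy as np
from scipy.interpolate import griddata

def densify(sparse_depth):
    """Interpolate a dense depth map from the valid (non-zero) sparse samples."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    vals = sparse_depth[ys, xs]
    dense = griddata((ys, xs), vals, (grid_y, grid_x), method='linear')
    # linear interpolation leaves the border undefined; fall back to nearest
    nearest = griddata((ys, xs), vals, (grid_y, grid_x), method='nearest')
    return np.where(np.isnan(dense), nearest, dense)
```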
Therefore, the fourth depth map is obtained by performing dense processing on the third depth map obtained by the electronic equipment, so that blurring processing can be completed more accurately when blurring processing is performed subsequently, and the detail content of the image can be enriched, so that a user can obtain a blurring image with a better effect.
Optionally, in this embodiment of the present application, before the step 304, the image information processing method provided in this embodiment of the present application may include the following step E:
step E: the image information processing device registers each depth region in the first depth map with a corresponding depth region of the second depth image.
Illustratively, the first depth map is the depth map of an RGB image, and registering each depth region in the first depth map with the corresponding depth region in the second depth map is essentially a process of registering a depth 3D lattice to the RGB image.
For example, after each depth region in the first depth map is registered with a corresponding depth region in the second depth map, the calibrated rotation parameter and translation parameter may be adopted to translate and then rotate the depth 3D lattice.
It can be understood that the translation eliminates the parallax between the depth 3D lattice and the RGB image: the coordinate system of the depth 3D lattice is translated onto that of the RGB image along the X-Y-Z axes. The rotation changes the pixel focal length of the depth 3D lattice so that it matches the RGB image, and distortion processing is performed on the depth 3D lattice; generally, the electronic device registers the center point of each depth area in the depth 3D lattice with the center point of the corresponding depth area in the RGB image, thereby completing the distortion processing of the depth 3D lattice.
It should be noted that the rotation parameter and the translation parameter are parameters pre-stored in the electronic device, and generally, when the electronic device leaves a factory, the rotation parameter and the translation parameter may be pre-stored in the electronic device in advance according to a distance between the multi-point laser and the RGB camera corresponding to the first image.
In one example, the above translation process is the electronic device translating the Y and Z directions in a depth 3D lattice. Generally, the X direction is a direction in which the distance between the multipoint laser and the RGB camera for obtaining the first image is the largest, the electronic device may use the X direction as a reference line, and may obtain the parallax offset of the multipoint laser translated to the RGB camera position by using the principle of double-camera triangulation, thereby obtaining the coordinate system corresponding to the multipoint laser.
In an example, the electronic device may project, through re-projection, a point of a depth 3D lattice of the multi-point laser onto the RGB image, specifically, a center point of each depth region of the depth 3D lattice may be registered to a center point of a corresponding depth region in the RGB image, and then, the electronic device may perform distortion processing on the depth 3D lattice according to a registration condition of the RGB image and the depth 3D lattice.
It should be noted that, because one laser point of the multi-point laser corresponds to the pixel points of one RGB image window, in the process of registering the depth 3D lattice, the electronic device needs to calculate the size of the pixel window corresponding to each central point of the depth 3D lattice, so as to complete the registration accurately. A sketch of the projection step follows.
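A sketch of projecting the laser's depth 3D lattice onto the RGB image using calibrated extrinsics and intrinsics; R, t, and K stand for the factory-calibrated rotation, translation, and camera matrix mentioned above, and lens distortion is omitted for brevity.

```python
import numpy as np

def project_lattice(points_xyz, R, t, K):
    """points_xyz: (N, 3) laser points in the laser frame -> (N, 2) pixel
    coordinates in the RGB image (translate/rotate, then pinhole-project)."""
    cam = R @ points_xyz.T + t.reshape(3, 1)   # laser frame -> RGB camera frame
    uvw = K @ cam                              # pinhole projection
    return (uvw[:2] / uvw[2]).T                # normalize to pixel coordinates
```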
In this way, the electronic device may complete the registration between the first depth map and the second depth map by translating and rotating the second depth map, thereby completing the image blurring process more accurately in the subsequent process.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
It should be noted that, in the image information processing method provided in the embodiment of the present application, the execution subject may be an image information processing apparatus, or a control module in the image information processing apparatus for executing the image information processing method. The image information processing apparatus provided in the embodiment of the present application will be described with an example in which an image information processing apparatus executes an image information processing method.
Fig. 3 is a schematic diagram of a possible structure of an image information processing apparatus according to an embodiment of the present application. As shown in fig. 3, the apparatus includes an obtaining module 601, an updating module 602, a determining module 603, and a processing module 604: the obtaining module 601 acquires a first image and a second image, where the first image and the second image are images of the same shooting object captured by different cameras of an electronic device; the obtaining module 601 is further configured to acquire a first depth map corresponding to the first image according to the first image and the second image; the obtaining module 601 is further configured to acquire a second depth map, where the second depth map includes depth information collected by a multi-point laser of the electronic device, the first depth map and the second depth map are depth information of the same object, and the second depth map includes N depth points; the updating module 602 is configured to update the depth information corresponding to each pixel point in the target depth area in the first depth map acquired by the obtaining module 601 to the depth information corresponding to the target depth point in the second depth map to obtain a third depth map, where the target depth area is matched with the target depth point; the determining module 603 is configured to determine a blurring range in the target image according to the third depth map, where the target image is any one of the following: the first image, the second image, and a third image obtained from the first image and the second image; the processing module 604 is configured to perform blurring processing on the image within the blurring range in the target image determined by the determining module 603, so as to obtain a blurred image.
In the image information processing apparatus provided in the embodiment of the application, after acquiring the first image and the second image of the same photographic subject captured by different cameras of the electronic device, the image information processing apparatus acquires a first depth map corresponding to the first image according to the first image and the second image, and also acquires a second depth map (the second depth map includes depth information acquired by the multi-point laser, and the first depth map and the second depth map are depth information of the same object); then, the image information processing apparatus may update the depth information corresponding to each pixel point in the target depth region in the first depth map to the depth information corresponding to the target depth point in the second depth map to obtain a third depth map, and finally determine the blurring range in the first image, the second image or the third image according to the third depth map. Therefore, the depth information in the shot image can be corrected and improved through the low-cost multi-point laser, and the depth information of each feature in the image shot by the electronic device can be accurately obtained, so that the depth information of the image can be accurately acquired at low cost, an accurate blurring range can be determined, and a better and more accurate blurred image is finally obtained.
Optionally, in this embodiment of the application, the updating module 602 is specifically configured to update the depth information corresponding to each pixel point in the target depth region when the amount of depth information in the target depth region is outside a preset range; the updating module 602 is further configured to keep the depth information corresponding to each pixel point in the target depth area when the amount of depth information in the target depth area is within the preset range.
Optionally, in this embodiment of the present application, the apparatus 600 further includes a dividing module 605, a calculating module 606, and an adjusting module 607; the dividing module 605 is configured to divide the first depth map into N depth regions; the calculating module 606 is configured to calculate the depth mean of each of the N depth regions divided by the dividing module 605; the determining module 603 is specifically configured to determine, among the N depth regions divided by the dividing module 605, at least two matching regions that match the second depth map; the determining module 603 is specifically configured to determine a depth scaling ratio according to the ratio of the depth mean value of each depth area in the matching areas to the depth value of the corresponding depth point in the second depth map; the adjusting module 607 is configured to adjust the depth value of each depth area according to the depth scaling ratio determined by the determining module 603.
Optionally, in this embodiment of the application, the determining module 603 is specifically configured to determine, for each depth region, that the depth region is not matched with the depth point when a variance of depth information in the depth region is greater than a preset threshold; the determining module 603 is specifically configured to determine that the depth area matches the depth point when the variance of the depth information in the depth area is smaller than or equal to a preset threshold.
Optionally, in this embodiment of the application, the processing module 604 is further configured to process the third depth map by using a dense algorithm to obtain a fourth depth map; the determining module 603 is specifically configured to determine a blurring range in the target image according to the depth information of the fourth depth map processed by the processing module 604.
The image information processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image information processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The image information processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 2, and is not described herein again to avoid repetition.
It should be noted that, as shown in fig. 3, modules that are necessarily included in the image information processing apparatus 600 are indicated by solid-line boxes, such as the obtaining module 601; modules that may or may not be included in the image information processing apparatus 600 are indicated by dashed boxes, such as the dividing module 605.
Optionally, as shown in fig. 4, an electronic device 800 is further provided in this embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and executable on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the above-mentioned embodiment of the image information processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110. The user input unit 107 includes a touch panel 1071 and other input devices 1072; the display unit 106 includes a display panel 1061; the input unit 104 includes a graphics processing unit 1041 and a microphone 1042; the memory 109 may be used to store software programs (e.g., an operating system and the application programs needed for at least one function) and various data.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 110 obtains a first image and a second image, where the first image and the second image are images of a same shooting object collected by different cameras of the electronic device; the processor 110 is further configured to obtain a first depth map corresponding to the first image according to the first image and the second image; the processor 110 is further configured to obtain a second depth map, where the second depth map includes depth information acquired by a multi-point laser of the electronic device, the first depth map and the second depth map include depth information of the same object, and the second depth map includes N depth points; the processor 110 is further configured to update depth information corresponding to each pixel point in a target depth area in the first depth map to depth information corresponding to a target depth point in the second depth map to obtain a third depth map, where the target depth area is matched with the target depth point; the processor 110 is further configured to determine a blurring range in a target image according to the third depth map, where the target image is any one of: a first image, a second image, and a third image obtained from the first image and the second image; the processor 110 is further configured to perform blurring processing on the image in the blurring range in the target image to obtain a blurred image.
In the electronic device provided in the embodiment of the application, after the electronic device obtains a first image and a second image of the same photographic object, which are acquired by different cameras of the electronic device, the electronic device obtains a first depth map corresponding to the first image according to the first image and the second image, and also obtains a second depth map (the second depth map includes depth information acquired by a multi-point laser, and the first depth map and the second depth map are depth information of the same object), and then the electronic device may update the depth information corresponding to each pixel point in a target depth area in the first depth map to the depth information corresponding to a target depth point in the second depth map to obtain a third depth map, and finally determine a blurring range in the first image, the second image, or the third image according to the third depth map. Therefore, the depth information in the shot image can be corrected and improved through the low-cost multipoint laser, and then the depth information of each characteristic in the shot image of the electronic equipment can be accurately obtained, so that the depth information of the image can be accurately obtained under the condition of saving cost, the accurate blurring range is determined, and the better and more accurate blurring image is finally obtained.
Optionally, the processor 110 is specifically configured to update the depth information corresponding to each pixel point in the target depth area when the number of pieces of depth information in the target depth area exceeds a preset range; and the processor 110 is specifically configured to maintain the depth information corresponding to each pixel point in the target depth area when the number of pieces of depth information in the target depth area is within the preset range.
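For illustration only (not part of the claimed embodiments), a minimal Python sketch of this count check follows; it reads "exceeds a preset range" as the count of valid depth samples falling outside that range, which is one possible interpretation, and the bounds are invented:

```python
import numpy as np

def maybe_update_region(region_depths, laser_depth, preset_range=(10, 500)):
    """Overwrite the area only when its count of valid depth samples falls
    outside the preset range; otherwise keep the stereo-derived values.
    The bounds 10 and 500 are illustrative, not from the embodiment."""
    valid_count = int(np.isfinite(region_depths).sum())
    lo, hi = preset_range
    if not (lo <= valid_count <= hi):          # count "exceeds the preset range"
        region_depths[:] = laser_depth         # trust the multi-point laser
    return region_depths
```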
Optionally, the processor 110 is specifically configured to divide the first depth map into N depth regions; the processor 110 is specifically configured to calculate a depth mean value of each of the N depth regions; the processor 110 is specifically configured to determine at least two matching regions that match the second depth map in the N depth regions; the processor 110 is specifically configured to determine a depth pull-up ratio according to a ratio of the depth mean value of each depth region in the matching regions to the depth value of the corresponding depth point in the second depth map; and the processor 110 is specifically configured to adjust the depth value of each depth region according to the depth pull-up ratio.
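A hedged sketch of this region-wise adjustment follows; the direction of the correction (dividing each region's pixels by its mean-to-laser ratio so that stereo depths move toward the laser readings) and the use of a global mean ratio for unmatched regions are assumptions, not statements of the claimed method:

```python
import numpy as np

def adjust_by_pull_up_ratio(depth_map, region_masks, region_means, laser_depths):
    """For each matched region, the pull-up ratio is taken as the region's
    stereo depth mean divided by its laser depth; dividing the region's
    pixels by this ratio pulls them toward the laser measurement."""
    ratios = {rid: region_means[rid] / laser_depths[rid]
              for rid in region_means if rid in laser_depths}
    global_ratio = float(np.mean(list(ratios.values())))  # fallback for unmatched regions (assumed)
    out = depth_map.copy()
    for rid, mask in region_masks.items():
        out[mask] /= ratios.get(rid, global_ratio)
    return out
```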
Optionally, the processor 110 is specifically configured to, for each depth region, determine that the depth region does not match the depth point when the depth information variance in the depth region is greater than a preset threshold; and the processor 110 is specifically configured to determine that the depth region matches the depth point when the depth information variance in the depth region is less than or equal to the preset threshold.
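As a minimal sketch of this variance test (the threshold value is illustrative, and the per-region depth samples are assumed to have been extracted beforehand):

```python
import numpy as np

def region_matches_laser_point(region_depths, var_threshold=0.05):
    """A region with low depth variance is close to planar in depth, so a
    single laser reading can represent it; a high variance (e.g. a depth
    edge crossing the region) means the region does not match the point."""
    return float(np.var(region_depths)) <= var_threshold
```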
Optionally, the processor 110 is specifically configured to process the third depth map by using a dense algorithm to obtain a fourth depth map; the processor 110 is specifically configured to determine a blurring range in the target image according to the depth information of the fourth depth map.
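The embodiments do not name a particular dense algorithm; purely as a stand-in under that assumption, nearest-neighbor interpolation over the valid pixels of the third depth map would yield a hole-free fourth depth map:

```python
import numpy as np
from scipy.interpolate import griddata

def densify_depth_map(sparse_depth):
    """Fill invalid (NaN) pixels of the third depth map by interpolating
    from valid ones -- a simple substitute for the unspecified dense algorithm."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(np.isfinite(sparse_depth))
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    return griddata((ys, xs), sparse_depth[ys, xs],
                    (grid_y, grid_x), method='nearest')
```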
It should be understood that, in this embodiment of the application, the input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 109 may be used to store software programs and various data, including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 110.
An embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When the program or the instruction is executed by a processor, each process of the foregoing image information processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the foregoing embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing image information processing method embodiment, with the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, or a chip system.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image information processing method, characterized by comprising:
acquiring a first image and a second image, wherein the first image and the second image are images of the same shooting object acquired by different cameras of electronic equipment;
acquiring a first depth map corresponding to the first image according to the first image and the second image;
acquiring a second depth map, wherein the second depth map comprises depth information acquired by a multi-point laser of the electronic equipment, the first depth map and the second depth map comprise depth information of the same object, and the second depth map comprises N depth points;
updating the depth information corresponding to each pixel point in a target depth area in the first depth map into the depth information corresponding to a target depth point in the second depth map to obtain a third depth map, wherein the target depth area is matched with the target depth point;
determining a blurring range in a target image according to the third depth map, wherein the target image is any one of the following items: the first image, the second image and a third image obtained according to the first image and the second image;
and performing blurring processing on the image in the blurring range in the target image to obtain a blurred image.
2. The method according to claim 1, wherein the updating the depth information corresponding to each pixel point in the target depth region in the first depth map to the depth information corresponding to the target depth point in the second depth map comprises:
when the number of pieces of depth information in the target depth area exceeds a preset range, updating the depth information corresponding to each pixel point in the target depth area;
and when the number of pieces of depth information in the target depth area is within the preset range, maintaining the depth information corresponding to each pixel point in the target depth area.
3. The method of claim 1, wherein updating the depth information corresponding to each pixel point in the target depth region in the first depth map to the depth information corresponding to the target depth point in the second depth map comprises:
dividing the first depth map into N depth regions;
calculating a depth mean value of each of the N depth regions;
determining at least two matching regions of the N depth regions that match the second depth map;
determining a depth pull-up ratio according to a ratio of the depth mean value of each depth region in the matching regions to the depth value of the corresponding depth point in the second depth map;
and adjusting the depth value of each depth region according to the depth pull-up ratio.
4. The method according to claim 3, wherein the determining at least two matching regions of the N depth regions that match the second depth map comprises:
for each depth region, determining that the depth region does not match the depth point if a depth information variance in the depth region is greater than a preset threshold;
determining that the depth region matches the depth point when the variance of the depth information in the depth region is less than or equal to a preset threshold.
5. The method according to claim 1, wherein the determining a blurring range in a target image according to the third depth map comprises:
processing the third depth map by adopting a dense algorithm to obtain a fourth depth map;
and determining a blurring range in the target image according to the depth information of the fourth depth map.
6. An image information processing apparatus, characterized in that the apparatus comprises an acquisition module, an updating module, a determining module, and a processing module, wherein:
the acquisition module is configured to acquire a first image and a second image, wherein the first image and the second image are images of the same shooting object acquired by different cameras of the electronic equipment;
the acquisition module is further configured to acquire a first depth map corresponding to the first image according to the first image and the second image;
the acquisition module is further configured to acquire a second depth map, wherein the second depth map comprises depth information acquired by a multi-point laser of the electronic equipment, the first depth map and the second depth map comprise depth information of the same object, and the second depth map comprises N depth points;
the updating module is configured to update the depth information corresponding to each pixel point in a target depth area in the first depth map acquired by the acquisition module to the depth information corresponding to a target depth point in the second depth map to obtain a third depth map, wherein the target depth area is matched with the target depth point;
the determining module is configured to determine a blurring range in a target image according to the third depth map, where the target image is any one of: the first image, the second image and a third image obtained according to the first image and the second image;
the processing module is configured to perform blurring processing on the image within the blurring range in the target image determined by the determining module to obtain a blurred image.
7. The apparatus of claim 6,
the updating module is specifically configured to update the depth information corresponding to each pixel point in the target depth area when the number of pieces of depth information in the target depth area is less than a preset number;
the updating module is specifically further configured to maintain the depth information corresponding to each pixel point in the target depth area when the number of pieces of depth information in the target depth area is greater than or equal to the preset number.
8. The apparatus of claim 7, further comprising a dividing module, a calculating module, and an adjusting module;
the dividing module is configured to divide the first depth map into N depth regions;
the calculation module is configured to calculate a depth mean value of each depth region of the N depth regions divided by the division module;
the determining module is specifically configured to determine at least two matching regions that match the second depth map among the N depth regions divided by the dividing module;
the determining module is specifically configured to determine a depth pull-up ratio according to a ratio of a depth mean value of each depth area in the matching area to a depth value of a corresponding depth point in the second depth map;
and the adjusting module is configured to adjust the depth value of each depth region according to the depth pull-up ratio determined by the determining module.
9. The apparatus of claim 8,
the determining module is specifically configured to determine, for each depth region, that the depth region is not matched with the depth point when a depth information variance in the depth region is greater than a preset threshold;
the determining module is specifically configured to determine that the depth region matches the depth point when the depth information variance in the depth region is less than or equal to a preset threshold.
10. The apparatus of claim 6,
the processing module is further configured to process the third depth map by using a dense algorithm to obtain a fourth depth map;
the determining module is specifically configured to determine a blurring range in the target image according to the depth information of the fourth depth map processed by the processing module.
11. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image information processing method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the image information processing method according to any one of claims 1 to 5.
CN202110376557.5A 2021-04-07 2021-04-07 Image information processing method and device and electronic equipment Active CN113301320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110376557.5A CN113301320B (en) 2021-04-07 2021-04-07 Image information processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110376557.5A CN113301320B (en) 2021-04-07 2021-04-07 Image information processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113301320A 2021-08-24
CN113301320B CN113301320B (en) 2022-11-04

Family

ID=77319432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110376557.5A Active CN113301320B (en) 2021-04-07 2021-04-07 Image information processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113301320B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837105A (en) * 2021-09-26 2021-12-24 北京的卢深视科技有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN113837106A (en) * 2021-09-26 2021-12-24 北京的卢深视科技有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN114125296A (en) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method, image processing device, electronic equipment and readable storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090022396A1 (en) * 2007-07-06 2009-01-22 Tatsumi Watanabe Image processing device, image processing method, image processing system, program, storage medium, and integrated circuit
US9672609B1 (en) * 2011-11-11 2017-06-06 Edge 3 Technologies, Inc. Method and apparatus for improved depth-map estimation
CN105245774A (en) * 2015-09-15 2016-01-13 努比亚技术有限公司 Picture processing method and terminal
US20180357775A1 (en) * 2015-11-30 2018-12-13 Huawei Technologies Co., Ltd. Image processing method and dual camera system
CN106937054A (en) * 2017-03-30 2017-07-07 维沃移动通信有限公司 Take pictures weakening method and the mobile terminal of a kind of mobile terminal
CN107948520A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method and device
CN108399596A (en) * 2018-02-07 2018-08-14 深圳奥比中光科技有限公司 Depth image engine and depth image computational methods
WO2020116844A1 (en) * 2018-12-03 2020-06-11 Samsung Electronics Co., Ltd. Electronic device and method for acquiring depth information by using at least one of cameras or depth sensor
WO2020199563A1 (en) * 2019-04-01 2020-10-08 四川深瑞视科技有限公司 Method, device, and system for detecting depth information
CN110335211A (en) * 2019-06-24 2019-10-15 Oppo广东移动通信有限公司 Bearing calibration, terminal device and the computer storage medium of depth image
CN112614057A (en) * 2019-09-18 2021-04-06 华为技术有限公司 Image blurring processing method and electronic equipment
CN111246092A (en) * 2020-01-16 2020-06-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐仁东 (Xu Rendong): "A Brief Discussion on Dual-Camera Technology in Mobile Phones", Electronic Production (《电子制作》) *
赵树磊等 (Zhao Shulei et al.): "Design of a Target Recognition and Pose Measurement System Based on a TOF Camera", Application of Electronic Technique (《电子技术应用》) *

Also Published As

Publication number Publication date
CN113301320B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
CN113301320B (en) Image information processing method and device and electronic equipment
US20180218485A1 (en) Method and apparatus for fusing plurality of depth images
CN111612820B (en) Multi-target tracking method, training method and device of feature extraction model
CN106981078B (en) Sight line correction method and device, intelligent conference terminal and storage medium
CN106570909B (en) Skin color detection method, device and terminal
EP3493523A2 (en) Method and apparatus for blurring preview picture and storage medium
US11042966B2 (en) Method, electronic device, and storage medium for obtaining depth image
WO2019037088A1 (en) Exposure control method and device, and unmanned aerial vehicle
CN112207821B (en) Target searching method of visual robot and robot
CN113643356B (en) Camera pose determination method, virtual object display method, device and electronic equipment
CN112532881B (en) Image processing method and device and electronic equipment
CN111126108A (en) Training method and device of image detection model and image detection method and device
CN112333385B (en) Electronic anti-shake control method and device
CN112291473B (en) Focusing method and device and electronic equipment
CN112446251A (en) Image processing method and related device
CN114119701A (en) Image processing method and device
EP4093015A1 (en) Photographing method and apparatus, storage medium, and electronic device
CN110874814A (en) Image processing method, image processing device and terminal equipment
CN111654623B (en) Photographing method and device and electronic equipment
CN114049670A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114565777A (en) Data processing method and device
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN107633498B (en) Image dark state enhancement method and device and electronic equipment
CN114390189A (en) Image processing method, device, storage medium and mobile terminal
CN112911091A (en) Parameter adjusting method and device of multipoint laser and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant