CN110555874A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN110555874A
Authority
CN
China
Prior art keywords
image
region
area
mask
pixel
Prior art date
Legal status
Granted
Application number
CN201810562100.1A
Other languages
Chinese (zh)
Other versions
CN110555874B (en)
Inventor
柯政遠
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201810562100.1A
Publication of CN110555874A
Application granted
Publication of CN110555874B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Abstract

The embodiment of the application discloses an image processing method and device. The method includes: an image processing device acquires a first region in a first image and acquires a second region in a second image; it then calculates the horizontal displacement difference between the position of the first region in the first image and the position of the second region in the second image, and determines the distance between the target object and the shooting device according to the horizontal displacement difference. The first image is an image containing the target object captured by a first camera of the shooting device, and the second image is an image containing the target object captured by a second camera of the shooting device. Because the horizontal displacement difference between the position of the first region in the first image and the position of the second region in the second image is used to represent the horizontal displacement difference of the corresponding pixel points of the two images, the amount of image computation is reduced and the working efficiency is improved.

Description

Image processing method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus.
Background
With the development of computer technology, Augmented Reality (AR) and/or Virtual Reality (VR) applications are becoming more and more common. In AR and/or VR applications, object depth distance is a very important parameter. The object depth distance refers to a distance between the object to be measured and the photographing apparatus.
In the prior art, the object depth distance is mainly measured as follows: two images captured by the shooting device are compared at a number of related points (one image is captured by a right camera of the shooting device, the other by a left camera, and the right camera and the left camera lie on a horizontal line), the parallax of each pixel point in the image is calculated to form a parallax depth map, and depth calculation is then performed on the parallax depth map to obtain the object depth distance. For example, stereo matching is performed on an image 1 captured by the left camera and an image 2 captured by the right camera to obtain the relative horizontal displacement relationship between corresponding pixel points of image 1 and image 2, and a parallax depth map is obtained from this relationship. Since parallax is inversely proportional to object depth distance, the object depth distance can be estimated from the parallax depth map. However, this approach requires calculating the parallax value of every pair of corresponding pixel points in the whole image, so the amount of image computation is large and the working efficiency is low.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, which can reduce the amount of image computation and improve the working efficiency.
In a first aspect, an embodiment of the present application provides an image processing method, including: the image processing device acquires a first region in a first image, where the first region is the image region corresponding to a target object in the first image. A second region is then acquired in a second image. The second region has the same shape and size as the first region, the pixel difference between the second region and the first region is smaller than a pixel difference threshold, and the second region is the image region corresponding to the target object in the second image. The image processing device calculates a horizontal displacement difference between the position of the first region in the first image and the position of the second region in the second image, where the horizontal displacement difference is used to represent the parallax between the first region and the second region. The distance between the target object and the shooting device can then be determined according to the horizontal displacement difference. The first image is an image captured by a first camera of the shooting device, the second image is an image captured by a second camera of the shooting device, and the first camera and the second camera are located on a horizontal line. Because the horizontal displacement difference between the position of the target object in the first image and its position in the second image is used to represent the horizontal displacement difference between the two images, the horizontal displacement difference of every pair of corresponding pixel points of the first image and the second image does not need to be calculated, which reduces the amount of computation and improves the working efficiency.
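For illustration only, the first-aspect method can be summarized in the following Python sketch; the Region type, the field names, the units and the numeric values are assumptions introduced here rather than terminology from the application, and the segmentation and mask-matching steps that produce the two regions are described in the later aspects:

from dataclasses import dataclass

@dataclass
class Region:
    x_position: float  # horizontal position of the region in its image, in pixels

def estimate_target_distance(first_region: Region, second_region: Region,
                             focal_length_px: float, baseline_cm: float) -> float:
    # Horizontal displacement difference between the two region positions (the parallax), in pixels.
    disparity_px = abs(first_region.x_position - second_region.x_position)
    # Distance = shooting focal length * camera separation / horizontal displacement difference.
    return focal_length_px * baseline_cm / disparity_px

# Illustrative numbers: region at x = 120 in the first image and x = 155 in the second image,
# focal length 1400 px, camera separation 6 cm  ->  1400 * 6 / 35 = 240 cm.
print(estimate_target_distance(Region(120), Region(155), 1400.0, 6.0))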
With reference to the first aspect, in one possible implementation manner, the image processing apparatus may perform image segmentation on the first image to obtain the first region. Common image segmentation techniques include: threshold-based image segmentation, semantic-based image segmentation, edge detection-based image segmentation, and so forth. Because the image segmentation technology can better segment a foreground region (an image region corresponding to a target object in an image) and a background region (an image region obtained by subtracting the foreground region from the image), the first region obtained by the image segmentation technology is more accurate, and the accuracy of image processing can be improved.
With reference to the first aspect, in one possible implementation manner, when performing image segmentation, the image processing apparatus may determine, as the first region, a region formed by a plurality of target pixel points in the first image whose characteristic values lie within the target threshold range. Threshold-based image segmentation is computationally simple and efficient, so it can improve both the accuracy and the computational efficiency of image processing.
With reference to the first aspect, in a possible implementation manner, before determining, as the first region, a region formed by at least two target pixel points in the first image, the image processing apparatus may further determine a reference region of the target object in the first image, obtain the characteristic values of a plurality of pixel points in the reference region, and determine the target threshold range for image segmentation according to those characteristic values. The first region obtained by image segmentation is the image region corresponding to the target object in the first image; because the threshold range for image segmentation is derived from the characteristic values of pixel points in the reference region of the target object, the image segmentation precision can be improved, a more complete first region can be obtained, and the image processing precision is further improved.
With reference to the first aspect, in one possible implementation manner, the characteristic value may include a color value, a gray value, or a depth reference value, where the depth reference value is used to represent a reference distance between the pixel point and the shooting device. In the embodiment of the application, whether the first image is a color image or a grayscale image, it can be segmented by the threshold-based image segmentation method to obtain the first region corresponding to the target object in the first image, which improves the computational efficiency and provides a more complete image processing method.
With reference to the first aspect, in one possible implementation, when the image processing apparatus acquires the second region in the second image, one or more mask regions may be acquired in the second image, each mask region having the same shape and size as the first region. According to the pixel difference between each mask region and the first region, a mask region whose pixel difference from the first region is smaller than the pixel difference threshold is determined as the second region. By searching the one or more mask regions for one whose pixel difference from the first region is smaller than the pixel difference threshold, the second region can be determined in the second image.
With reference to the first aspect, in a possible implementation manner, when acquiring one or more mask regions in the second image, the image processing apparatus may determine a mask window corresponding to the first region according to the first region, make the upper and lower edges of the mask window flush with the upper and lower edges of the second image, and then translate the mask window in the second image to obtain the one or more mask regions. The mask window has the same size and shape as the first image, and each mask region has the same shape and size as the first region. Because the mask window is translated only after its upper and lower edges are made flush with the upper and lower edges of the second image, the number of mask regions in the second image can be reduced, which reduces the amount of computation and improves the working efficiency.
With reference to the first aspect, in a possible implementation manner, the pixel difference between each mask region and the first region may be a sum of color differences between each pixel point in each mask region and a corresponding pixel point in the first region. When the first image and the second image are both color images, the image processing device determines the pixel difference by calculating the color difference of the color values of the corresponding pixel points on the first area and the mask area, and determines the mask area with the sum of the color difference with the first area smaller than the color difference threshold as the second area. A more sophisticated image processing method is provided.
With reference to the first aspect, in a possible implementation manner, the pixel difference between each mask region and the first region may be a sum of gray scale differences between each pixel point in each mask region and a corresponding pixel point in the first region. When the first image and the second image are both gray level images, the image processing device determines pixel difference by calculating gray level difference of gray level values of corresponding pixel points on the first area and the mask area, and determines the mask area with the sum of the gray level difference with the first area smaller than a gray level difference threshold value as the second area. The image processing method of the embodiment of the application is still suitable for gray level images, has a wide application range, and provides a more complete image processing method.
With reference to the first aspect, in one possible implementation manner, if the first image is a color image and the second image is a grayscale image, then before determining, according to the pixel difference between each mask region and the first region, a mask region whose pixel difference from the first region is smaller than the pixel difference threshold as the second region, the image processing device may convert the color value of each pixel point in the first region to obtain the gray value of each pixel point. The image processing method of the embodiment of the present application is therefore also applicable when the first image is a color image and the second image is a grayscale image, which further expands the application range and provides a more complete image processing method.
With reference to the first aspect, in one possible implementation manner, when determining the second region, the image processing apparatus may determine, as the second region, a mask region having a smallest pixel difference from the first region. According to the embodiment of the application, the mask area with the minimum pixel difference with the first area is searched in one or more mask areas to serve as the second area, the obtained pixel difference between the second area and the first area is minimum and more accurate, and therefore the accuracy of image processing can be improved.
With reference to the first aspect, in one possible implementation manner, when the image processing apparatus determines the distance between the target object and the shooting device according to the horizontal displacement difference, the shooting focal length when the first image and/or the second image is/are shot may be acquired. And then acquiring the spacing distance between the first camera and the second camera. And determining the distance between the target object and the shooting equipment according to the shooting focal length, the spacing distance and the horizontal displacement difference.
With reference to the first aspect, in one possible implementation, the image processing apparatus calculates a product of the photographing focal length and the separation distance. And determining the quotient of the product of the shooting focal length and the separation distance and the horizontal displacement difference as the distance between the target object and the shooting equipment. According to the embodiment of the application, the distance between the target object and the shooting equipment is calculated by utilizing the inverse relation between the horizontal displacement difference and the distance, the calculation is simple, and the calculation efficiency is high.
In a second aspect, an embodiment of the present application provides an image processing apparatus having a function of implementing the image processing method of the first aspect described above. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
With reference to the second aspect, in one possible implementation, the image processing apparatus includes a first obtaining module, a second obtaining module, a calculating module, and a determining module. The first obtaining module is configured to obtain a first region in a first image, where the first region is an image region corresponding to a target object in the first image. The second obtaining module is configured to obtain a second region in a second image. The second region has the same shape and size as the first region, the pixel difference between the second region and the first region is smaller than a pixel difference threshold, and the second region is an image region corresponding to the target object in the second image. The calculating module is configured to calculate a horizontal displacement difference between the position of the first region in the first image obtained by the first obtaining module and the position of the second region in the second image determined by the second obtaining module, where the horizontal displacement difference is used to represent a parallax between the first region and the second region. The determining module is configured to determine the distance between the target object and the shooting device according to the horizontal displacement difference calculated by the calculating module. The first image is an image captured by a first camera of the shooting device, the second image is an image captured by a second camera of the shooting device, and the first camera and the second camera are located on a horizontal line.
In a third aspect, an embodiment of the present application provides another image processing apparatus, which includes a processor and a memory, the processor and the memory being connected to each other, wherein the memory is used for storing program codes;
the processor is used for calling the program code and executing the following operations:
A first region is acquired in the first image and a second region is acquired in the second image. The first area is an image area corresponding to a target object in the first image, the second area is the same as the first area in shape and size, the pixel difference between the second area and the first area is smaller than a pixel difference threshold value, and the second area is an image area corresponding to the target object in the second image. Calculating a horizontal displacement difference between a position of the first region in the first image and a position of the second region in the second image, the horizontal displacement difference indicating a parallax between the first region and the second region. The distance between the target object and the shooting device can be determined according to the horizontal displacement difference. Wherein the first image is an image captured by a first camera of the image capturing apparatus, the second image is an image captured by a second camera of the image capturing apparatus, and the first camera and the second camera are located on a horizontal line.
In a fourth aspect, embodiments of the present application provide a computer storage medium for storing computer program instructions for an image processing apparatus, which includes instructions for executing the program according to the first aspect.
By implementing the embodiment of the application, on one hand, the amount of image computation can be reduced and the working efficiency improved; on the other hand, the accuracy of image processing can be improved and a more accurate object depth distance can be obtained.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below.
FIG. 1 is a schematic diagram of a right view and a left view of the same scene;
FIG. 2 is a schematic diagram of the triangulation principle;
FIG. 3 is a schematic flow chart diagram of an image processing method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a first region after image segmentation;
FIG. 5A is a schematic diagram of obtaining a mask region on a left view;
FIG. 5B is another schematic diagram of obtaining a mask region on a left view;
FIG. 5C is yet another schematic diagram of obtaining a mask region on a left view;
FIG. 6A is a schematic illustration of a horizontal displacement difference;
FIG. 6B is a schematic illustration of another horizontal displacement difference;
FIG. 6C is a schematic illustration of yet another horizontal displacement difference;
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The image processing method provided by the embodiment of the application can be applied to object depth measurement scenarios. In some possible embodiments, the depth measurement proceeds as follows. First, the left and right views are matched using a stereo matching method, and the projection points of each point in the shooting scene on the left view and the right view are found; the projection points appear as pixel points on the left and right views. Then, using the triangulation principle, the relative horizontal displacement between each projection point on the left view and the corresponding projection point on the right view is calculated, yielding a parallax depth map generated from the relative horizontal displacement of each projection point (pixel point) on the left or right view. Since parallax is inversely proportional to depth distance, the object depth distance can be estimated from the parallax depth map. The image captured by the right camera of the shooting device is the right view, the image captured by the left camera is the left view, and the left camera and the right camera lie on a horizontal line. For example, FIG. 1 shows a right view and a left view of the same scene. Since the right camera and the left camera are on the same horizontal straight line and a fixed distance apart, there is a slight horizontal difference between the right view and the left view. The stereo matching may include local stereo matching and global stereo matching. As shown in FIG. 2, FIG. 2 is a schematic diagram of the triangulation principle. Taking any scene point in the shooting scene as an example, P denotes a scene point, Z_P denotes the distance between point P and the shooting device (that is, the depth distance of scene point P), and f denotes the shooting focal length of the left camera and/or the right camera. P_r denotes the projection point of P on the right view and P_l denotes the projection point of P on the left view. O_l denotes the location of the left camera, O_r denotes the location of the right camera, and the distance between O_l and O_r is B. pi_l denotes the maximum imaging range of the left camera and pi_r denotes the maximum imaging range of the right camera, where pi_l = pi_r. X_l denotes the horizontal distance from P_l to the leftmost side of the left view, and X_r denotes the horizontal distance from P_r to the leftmost side of the right view. The relative horizontal displacement is d = X_l - X_r. Since triangle PP_lP_r and triangle PO_lO_r are similar, formula (1) can be obtained:
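One consistent reading of FIG. 2, which agrees with the simplified form given in the next paragraph (the exact written form of formula (1) in the original publication may differ), is:

    (B - (X_l - X_r)) / B = (Z_P - f) / Z_P        (1)

Cross-multiplying gives Z_P * B - Z_P * (X_l - X_r) = Z_P * B - B * f, hence Z_P * (X_l - X_r) = B * f.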
Formula (1) simplifies to Z_P = B * f / (X_l - X_r) = B * f / d. Similarly, the relative horizontal displacement of each pixel point on the left or right view can be calculated, and the object depth distance is then estimated from the mean of the relative horizontal displacements of all the pixel points on the left or right view. This approach has two drawbacks. On one hand, when measuring object depth, the object depth distance is estimated from the relative horizontal displacement between each pixel point on the left view and the corresponding pixel point on the right view, so the relative horizontal displacement of every pixel point in the image must be calculated; the amount of computation is therefore large and the working efficiency is low. On the other hand, the object depth distance represents the distance between the object to be measured and the shooting device, yet the relative horizontal displacement of the region where the object is located is represented by the mean of the relative horizontal displacements of all the pixel points in the image, which can make the calculated object depth distance inaccurate.
The image processing method provided by the embodiment of the application can be applied to image processing devices with an image processing function, such as smartphones, tablet computers, desktop computers, and notebook computers. The shooting device of the embodiment of the application may be integrated into the image processing device or may exist independently of it. The embodiments of the present application are not limited in this respect.
The following describes an image processing method and apparatus provided in an embodiment of the present application with reference to fig. 3 to 8.
In order to better understand and implement the solution of the embodiment of the present application, the photographing apparatus according to the embodiment of the present application may include a plurality of cameras (here, 2 or more), an image photographed by a right camera in the photographing apparatus is defined as a right view, an image photographed by a left camera in the photographing apparatus is defined as a left view, and the left camera and the right camera are located on a horizontal line. The first image in the image processing method provided by the embodiment of the application may be a right view or a left view. If the first image is a right view, the second image is a left view; and if the first image is the left view, the second image is the right view. Since the shooting parameters of the left camera and the right camera in the shooting device are the same, the sizes and resolutions of the left view and the right view are the same. Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method provided in an embodiment of the present application. For convenience of description, in the image processing method shown in fig. 3, the first image is taken as a right view for example to explain. As shown in fig. 3, the image processing method provided in the embodiment of the present application may include:
S101, the image processing device acquires a first area in a first image.
The first region may be an image region surrounded by a contour of the target object in the right view, or may be an image region including the contour of the target object in the right view (for example, a regular geometric shape region including the contour of the target object), which is not limited in the embodiment of the present application.
S102, the image processing device acquires a second area in the second image.
The second area may be an image area surrounded by the contour of the target object in the left view, or an image area including the contour of the target object in the left view.
S103, the image processing apparatus calculates a horizontal displacement difference between the position of the first region in the first image and the position of the second region in the second image.
In this case, since the right camera and the left camera are on a horizontal straight line and there is a fixed distance between the right camera and the left camera in the photographing apparatus, there is a slight horizontal difference between the right view and the left view, and thus there is also a slight horizontal difference (i.e., a horizontal displacement difference) between the first region in the right view and the second region in the left view. The horizontal displacement difference can be expressed in the form of pixel points and also can be expressed in the form of physical distance. For example, the horizontal displacement difference may be 30 pixel points, or may be 0.2 cm.
And S104, the image processing device determines the distance between the target object and the shooting equipment according to the horizontal displacement difference.
In the embodiment of the present application, for the step S101, there may be some possible implementation manners as follows:
In some possible embodiments, the image processing apparatus obtains a right view (referred to as the first image) including the target object captured by a right-side camera (referred to as the first camera) of the shooting device, identifies the target object in the right view by using an image recognition technique in artificial intelligence (AI), and marks the target object on the right view. The image processing apparatus may take the marked image area in the right view as the first region. Optionally, the target object captured by the right camera is presented entirely in the right view. The target object can be any object to be measured in the shooting scene.
in some possible embodiments, the image processing apparatus may perform image segmentation on the right view by using an image segmentation technique to obtain the first region. Image segmentation techniques may include threshold-based image segmentation, semantic-based image segmentation, edge detection-based image segmentation, and so forth, among others. Because the image segmentation technology can better segment a foreground region (an image region corresponding to a target object in an image) and a background region (an image region obtained by subtracting the foreground region from the image), the first region obtained by utilizing the image segmentation technology is more accurate, and the accuracy of image processing can be improved.
In some possible embodiments, the image processing apparatus may perform image segmentation on the right view by using a threshold-based image segmentation method to obtain the first region. Specifically, the image processing apparatus may obtain a target threshold range for image segmentation, obtain the characteristic value of each pixel point in the right view, and determine, as the first region (foreground region), a region formed by a plurality of (here, two or more) target pixel points whose characteristic values lie within the target threshold range. A region formed by a plurality of (here, two or more) pixel points whose characteristic values lie outside the target threshold range is then taken as the background region. Since pixel points in an image have a size and are usually square, even two pixel points located on one line can determine a region, which may serve as the first region. The first region may be the region formed directly by the plurality of target pixel points, or it may be the smallest continuous region containing them. The target threshold range may be a threshold range preset by a user, or a threshold range calculated by the image processing device from the image features of the right view; it may be a color value range, a gray value range, or a depth reference value range, and the characteristic value of a pixel point may correspondingly include a color value, a gray value, or a depth reference value. The color value may be a red-green-blue (RGB) value or a YCbCr color space value. For example, the color value range may be 189-205 and the gray value range may be 124-156. The depth reference value may be the distance between a pixel point in the right view and the shooting device; for example, the depth reference value may be Z_P in formula (1) above. If the right view is a color image, the characteristic values of the pixel points may include color values and/or depth reference values; if the right view is a grayscale image, they may include gray values and/or depth reference values. As shown in FIG. 4, FIG. 4 is a schematic diagram of the first region after image segmentation. Assuming that the right view R is a color image and the color value range is 189-205, the region composed of pixel points whose color values lie outside 189-205 (smaller than 189 or larger than 205) is determined as the background region, and the color values of all pixel points in the background region are set to 0, as in the black region in FIG. 4. The region formed by the target pixel points whose color values lie within 189-205 is determined as the first region, and its color values are set to 255, as in the white region in FIG. 4. In the embodiment of the application, threshold-based image segmentation is computationally simple and efficient, so it can improve both the accuracy and the computational efficiency of image processing.
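A minimal Python sketch of this threshold-based segmentation, assuming the characteristic value is a single per-pixel value (for example a gray value or a depth reference value) and that the target threshold range is already known; the function name, the NumPy representation and the 480 x 640 example are illustrative assumptions:

import numpy as np

def threshold_segment(feature_map: np.ndarray, low: float, high: float) -> np.ndarray:
    # Pixel points whose characteristic value lies within [low, high] form the first
    # (foreground) region and are set to 255; all other pixel points form the background
    # region and are set to 0.
    return np.where((feature_map >= low) & (feature_map <= high), 255, 0).astype(np.uint8)

# Example: with a gray-value range of 124-156 (as in the text), the white pixels of the
# returned mask form the first region, as in the schematic of FIG. 4.
gray_right_view = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
first_region_mask = threshold_segment(gray_right_view, 124, 156)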
In some possible embodiments, the target threshold range may be obtained as follows:
1) The image processing device determines a reference area corresponding to the target object in the right view. For example, the image processing apparatus may recognize the right view by using an image recognition technology such as pattern recognition, a support vector machine, and the like, recognize the target object in the right view, and determine an image area occupied by the target object recognized in the right view as a reference area, or may determine an area according to a frame or click operation on the right view by a user as a reference area corresponding to the target object. The reference area is used to reflect the preliminary positioning of the target object in the right view, for example, the reference area may be an image area in the right view that is larger than the outline of the target object, or an image area in the right view that is smaller than the outline of the target object, so that the image area where the target object is located in the right view, that is, the first area, may be further determined more accurately.
2) After determining the reference region corresponding to the target object, the image processing device acquires the feature values of a plurality of (here, greater than or equal to 2) pixel points in the reference region. For example, the image processing device samples all the acquired pixel points in the reference region to obtain a plurality of sampling points in the reference region, and acquires characteristic values of the plurality of sampling points. The image processing apparatus may also extract all the feature points in the reference region, and acquire feature values of all the feature points in the reference region. The characteristic value of the pixel point may include a color value, a gray value, or a depth reference value. The depth reference value may be used to represent a reference distance between each pixel point and the photographing device.
3) The image processing device determines the target threshold range for image segmentation according to the characteristic values of the plurality of pixel points in the reference region. For example, if the right view is a color image, the image processing apparatus obtains the color values and depth reference values of all the pixel points in the reference region and may calculate, from them, the color mean C_ave, the color standard deviation C_delta, the depth mean D_ave, and the depth standard deviation D_delta of all the pixel points in the reference region. The target threshold range may then be (C_ave - C_delta) ~ (C_ave + C_delta) together with (D_ave - D_delta) ~ (D_ave + D_delta). The image processing device can therefore obtain the color value and the depth reference value of each pixel point in the first image, and determine, as the first region, the region formed by all pixel points whose color values lie within (C_ave - C_delta) ~ (C_ave + C_delta) and whose depth reference values lie within (D_ave - D_delta) ~ (D_ave + D_delta). Further, the region composed of all pixel points whose color values lie outside (C_ave - C_delta) ~ (C_ave + C_delta) and/or whose depth reference values lie outside (D_ave - D_delta) ~ (D_ave + D_delta) is taken as the background region. In the embodiment of the application, the image processing device determines the reference region of the target object on the first image and then determines the target threshold range for image segmentation according to the characteristic values of the pixel points in the reference region, as sketched below. On one hand, the threshold range for image segmentation does not need to be set by a user, which reduces manual processing. On the other hand, the reference region is an approximate region of the target object on the right view, and the target threshold range for image segmentation is obtained from the characteristic values of pixel points within this approximate region, so the first region obtained by image segmentation is a more accurate image region of the target object in the right view; the image segmentation precision can thus be improved, a more complete first region is obtained, and the image processing precision is further improved.
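The mean-plus/minus-standard-deviation rule above can be sketched in Python as follows; treating the reference region as a binary mask and using a gray-value feature map are simplifying assumptions made here for illustration:

import numpy as np

def target_threshold_range(feature_map: np.ndarray, reference_mask: np.ndarray):
    # Characteristic values of the pixel points inside the reference region.
    values = feature_map[reference_mask > 0].astype(np.float64)
    ave, delta = values.mean(), values.std()
    # e.g. (C_ave - C_delta) ~ (C_ave + C_delta) for colour,
    # or (D_ave - D_delta) ~ (D_ave + D_delta) for the depth reference value.
    return ave - delta, ave + delta

# Example: a rectangular reference region giving a rough, preliminary localisation of the target.
gray_right_view = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
reference_mask = np.zeros_like(gray_right_view)
reference_mask[200:300, 250:400] = 1
low, high = target_threshold_range(gray_right_view, reference_mask)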
In the embodiment of the present application, for the step S102, there may be some possible implementation manners as follows:
In some possible embodiments, the image processing apparatus may acquire the second region in the second image by using image segmentation or AI recognition. The method for acquiring the second region may be the same as or different from the method for acquiring the first region, and this embodiment of the present application is not limited. The pixel difference threshold may be user-defined, or determined by the image processing apparatus according to the feature points of the first image and the second image. For example, a first feature point corresponding to the target object in the first image and a second feature point corresponding to the first feature point in the second image are respectively extracted, a pixel difference between the first feature point and the second feature point is calculated, and a mean value of the pixel difference is used as a pixel difference threshold.
In some possible embodiments, the second region may be obtained as follows:
1) Based on the first region obtained as described above, the image processing apparatus may obtain one or more mask regions in the left view (referred to as the second image). Each of the mask regions has the same shape and size as the first region. The left view is an image including the target object captured by a left camera (referred to as the second camera) of the shooting device. Optionally, the target object captured by the left camera is fully presented in the left view. As shown in FIG. 5A, FIG. 5A is a schematic diagram of obtaining a mask region on the left view. The first region a1 is obtained from the right view R. The image processing apparatus determines the mask window A1 according to the first region a1; specifically, the mask window corresponding to the first region may be determined according to the size, shape and position of the first region in the right view. The hatched region in A1 is an opaque region, and the region c1 in A1 corresponding to a1 is a transparent region; that is, the position of c1 in the mask window A1 is the same as the position of a1 in the right view R, and the size and shape of c1 are the same as those of a1. The mask window A1 is laid over the left view L, the area of the left view L shown through the region c1 of the mask window A1 is a mask region b1, and by moving the mask window A1 over the left view L, different mask regions b1 can be obtained; the shape and size of a mask region b1 are the same as those of the first region a1. The mask window A1 may be moved left and right in the left view L, or up and down in the left view L. When the mask window A1 moves over the left view L, a fixed number of pixels may be moved each time, such as 5 pixels or 1 pixel, or a different number of pixels may be moved each time, such as 10 pixels the first time, 8 pixels the second time, and so on.
In some possible embodiments, the image processing device makes the upper and lower edges of the mask window flush with the upper and lower edges of the left view, respectively, and then translates the mask window to obtain at least one mask region in the left view; the image processing device may record, for each mask region, the number of pixel points by which the corresponding mask window has been translated on the left view. As shown in FIG. 5B, the image processing apparatus determines a mask window A2 according to the size, shape, and position of the first region a1, thereby determining the four vertices of the mask window A2, such as vertices 1, 2, 3, 4 shown in FIG. 5B. The hatched region in A2 is an opaque region, and the region c1 in A2 corresponding to a1 is a transparent region. Vertices 1', 2', 3', 4' denote the 4 vertices of the left view L. Making the upper and lower edges of the mask window flush with the upper and lower edges of the left view means that the upper edge of A2, determined by points 1 and 2, is collinear with the upper edge of L, determined by points 1' and 2', and the lower edge of A2, determined by points 3 and 4, is collinear with the lower edge of L, determined by points 3' and 4'. Vertex 2 of the mask window A2 may be aligned with vertex 1' of the left view L and vertex 4 of A2 with vertex 3' of L, and from this position the mask window A2 is translated rightward over L; during the movement, any area of the left view L that is not completely shown through the region c1 of A2 is discarded, and each area of the left view L shown through c1, having the same size and shape as a1, is taken as a mask region b2. Similarly, vertex 1 of the mask window A2 may be aligned with vertex 2' of the left view L and vertex 3 of A2 with vertex 4' of L, and from this position the mask window A2 is translated leftward, with the incompletely shown areas discarded and each area of L shown through c1, having the same size and shape as a1, taken as a mask region b2. In this embodiment, because the left camera and the right camera of the shooting device are on the same horizontal line, their vertical positions are the same, so the vertical positions of the target object in the left view and the right view are also the same; that is, there is only a horizontal difference between the left view and the right view. Making the upper and lower edges of the mask window flush with the upper and lower edges of the left view and then translating it can therefore reduce the number of mask regions in the left view, which reduces the image processing workload and improves the working efficiency.
In some possible embodiments, as shown in FIG. 5C, FIG. 5C is a schematic diagram of yet another way of obtaining a mask region on the left view. Because the target object captured by the right camera of the shooting device is shifted toward the left in the right view, the target object captured by the left camera is shifted toward the right in the left view. The image processing apparatus can therefore align the 4 vertices of the mask window A3 with the corresponding vertices of the left view L (that is, vertex 1 with vertex 1', vertex 2 with vertex 2', vertex 3 with vertex 3', and vertex 4 with vertex 4'), and from this position translate the mask window A3 rightward over L; during the movement, any area of the left view L that is not completely shown through the region c1 of A3 is discarded, and each area of L shown through c1, having the same size and shape as a1, is taken as a mask region b3.
2) The image processing device may obtain a pixel difference between each of the one or more mask regions and the first region, so as to determine a mask region having a smallest pixel difference from the first region among the one or more mask regions as the second region. The pixel difference may include a color or gray difference between the pixels. The second area may be an image area corresponding to the target object in the left view. For example, assuming that the image processing device acquires 200 mask regions in total in the left view, the image processing device may acquire the pixel difference of each of the 200 mask regions from the first region. The mask region having the smallest difference in pixels from the first region among the 200 mask regions is determined as the second region.
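Steps 1) and 2) above can be sketched together in Python as a horizontal sliding-window search; rectangular regions, a sum-of-absolute-differences pixel difference and a fixed translation step are simplifying assumptions, not the wording of the application:

import numpy as np

def find_second_region(first_region: np.ndarray, row: int,
                       second_image: np.ndarray, step: int = 1):
    # Slide a window of the first region's size along the same row of the second image
    # (upper and lower edges kept flush, horizontal translation only) and return the
    # horizontal offset and pixel difference of the mask region that differs least.
    h, w = first_region.shape[:2]
    best_offset, best_diff = None, np.inf
    for x in range(0, second_image.shape[1] - w + 1, step):
        candidate = second_image[row:row + h, x:x + w]
        diff = np.abs(candidate.astype(np.int64) - first_region.astype(np.int64)).sum()
        if diff < best_diff:
            best_offset, best_diff = x, diff
    return best_offset, best_diff

The returned offset corresponds to the lateral position of the second region in the second image, which is what the horizontal displacement calculation in step S103 uses.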
In this embodiment, because the first region is used as the reference when searching for the second region, the second region has the same shape and size as the first region and its pixel difference from the first region is below a small pixel difference threshold. This eliminates the interference that other image regions in the left view, which do not belong to the target object, would otherwise cause when acquiring the second region, and ensures that the pixel difference between the first region and the second region is sufficiently small. As a result, the subsequently calculated horizontal displacement difference between the position of the first region in the right view and the position of the second region in the left view is more accurate, and the resulting distance between the target object and the shooting device is more accurate.
In some possible embodiments, if the right view (referred to as the first image) and the left view (referred to as the second image) are color images, the image processing apparatus may obtain the sum of color differences between each pixel point in each mask region and the corresponding pixel point in the first region, and determine the mask region whose sum of color differences with the first region is the smallest as the second region. For example, assume that the 70th mask region includes the 5 pixel points A_70, B_70, C_70, D_70 and E_70, and that the first region includes the 5 pixel points A'_30, B'_30, C'_30, D'_30 and E'_30, where A_70 corresponds to A'_30, B_70 corresponds to B'_30, C_70 corresponds to C'_30, D_70 corresponds to D'_30, and E_70 corresponds to E'_30. The image processing device can respectively calculate the color differences Cd_A, Cd_B, Cd_C, Cd_D and Cd_E between the color values of the 5 pixel points A_70, B_70, C_70, D_70, E_70 in the 70th mask region and the 5 corresponding pixel points A'_30, B'_30, C'_30, D'_30, E'_30 in the first region, where Cd_A may be obtained according to formula (2).
In formula (2), R_A may represent the red component of the RGB color value of pixel point A_70, and R_A' the red component of the RGB color value of pixel point A'_30; G_A may represent the green component of the RGB color value of A_70, and G_A' the green component of the RGB color value of A'_30; B_A may represent the blue component of the RGB color value of A_70, and B_A' the blue component of the RGB color value of A'_30. Cd_B, Cd_C, Cd_D and Cd_E can be obtained in the same way as Cd_A. The image processing device can calculate the sum of color differences Cd_total = Cd_A + Cd_B + Cd_C + Cd_D + Cd_E between the color values of each mask region and the first region, and determine the mask region whose sum of color differences with the first region is the smallest, (Cd_total)_min, as the second region. In the embodiment of the application, if both the right view and the left view are color images, the image processing device determines the pixel difference by calculating the RGB color components of the pixel points in the first region and the mask region, thereby providing a more complete image processing method.
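Since formula (2) is not reproduced in this text, the Python sketch below uses a per-channel absolute-difference colour distance as a stand-in for the per-pixel colour difference Cd; the actual formula (2) may differ (for example, a Euclidean distance over the R, G and B components):

import numpy as np

def color_difference_sum(mask_region: np.ndarray, first_region: np.ndarray) -> float:
    # Cd_total = Cd_A + Cd_B + ...: per-pixel colour difference summed over corresponding
    # pixel pairs; here Cd for one pair is taken as |R - R'| + |G - G'| + |B - B'| (assumption).
    a = mask_region.astype(np.int64)
    b = first_region.astype(np.int64)
    return float(np.abs(a - b).sum())

# The second region is then the mask region whose Cd_total with the first region is smallest:
# second_region = min(mask_regions, key=lambda m: color_difference_sum(m, first_region))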
In some possible embodiments, if both the right view (referred to as the first image) and the left view (referred to as the second image) are grayscale images, the image processing apparatus may obtain the sum of gray differences between the gray values of each pixel point in each mask region and the corresponding pixel point in the first region, and determine the mask region whose sum of gray differences with the first region is the smallest as the second region. For example, assume that the 100th mask region includes the 5 pixel points A_100, B_100, C_100, D_100 and E_100, and that the first region includes the 5 pixel points A'_30, B'_30, C'_30, D'_30 and E'_30, where A_100 corresponds to A'_30, B_100 corresponds to B'_30, C_100 corresponds to C'_30, D_100 corresponds to D'_30, and E_100 corresponds to E'_30. The image processing apparatus may calculate the sum of gray differences GreyD between the 5 pixel points A_100, B_100, C_100, D_100, E_100 in the 100th mask region and the corresponding pixel points A'_30, B'_30, C'_30, D'_30, E'_30 in the first region according to formula (3):
GreyD = |Grey_A - Grey_A'| + |Grey_B - Grey_B'| + |Grey_C - Grey_C'| + |Grey_D - Grey_D'| + |Grey_E - Grey_E'|  (3)
In formula (3), |Grey_A - Grey_A'| denotes the absolute difference between the gray value Grey_A of pixel point A_100 and the gray value Grey_A' of pixel point A'_30; |Grey_B - Grey_B'| denotes the absolute difference between the gray value Grey_B of pixel point B_100 and the gray value Grey_B' of pixel point B'_30; |Grey_C - Grey_C'| denotes the absolute difference between the gray value Grey_C of pixel point C_100 and the gray value Grey_C' of pixel point C'_30; |Grey_D - Grey_D'| denotes the absolute difference between the gray value Grey_D of pixel point D_100 and the gray value Grey_D' of pixel point D'_30; |Grey_E - Grey_E'| denotes the absolute difference between the gray value Grey_E of pixel point E_100 and the gray value Grey_E' of pixel point E'_30. The image processing apparatus may calculate the sum of gray differences GreyD between the gray values of each mask region and the first region, and determine the mask region whose sum of gray differences with the first region is the smallest, (GreyD)_min, as the second region. In the embodiment of the application, if the left view and the right view are both grayscale images, the image processing device can determine the pixel difference by calculating the absolute differences of the gray values of the pixel points in the first region and the mask region; the image processing method of the embodiment of the application is thus also applicable to grayscale images, has a wide application range, and provides a more complete image processing method.
In some possible embodiments, if the right view (referred to as the first image) is a color image and the left view (referred to as the second image) is a grayscale image, then for each pixel point in the first region, the image processing apparatus may convert the color value of the pixel point to obtain its gray value, and then obtain the sum of gray differences between the gray values of each pixel point in each mask region and the corresponding pixel point in the first region. The mask region having the smallest sum of gray differences with the first region may then be determined as the second region. For example, the image processing apparatus may convert the color value of each pixel point in the first region into the corresponding gray value according to the RGB-to-gray conversion formula Gray = R * 0.299 + G * 0.587 + B * 0.114, then calculate the sum of gray differences GreyD between the gray values of each mask region and the first region using formula (3), and determine the mask region with the smallest sum of gray differences as the second region. In the embodiment of the present application, if one of the left and right views is a color image and the other is a grayscale image, for example the right view is a color image and the left view is a grayscale image, the image processing apparatus may convert the color values of the first region in the right view into gray values and then determine the pixel difference by calculating the absolute differences of the gray values between the first region and each mask region. The image processing method provided by the embodiment of the application thus also applies to the case where one image is in color and the other in grayscale, which further expands the application range.
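A short Python sketch of this mixed case, assuming an H x W x 3 RGB array for the colour first region and an H x W array of gray values for a mask region; the array layout and function names are illustrative assumptions:

import numpy as np

def rgb_to_gray(region_rgb: np.ndarray) -> np.ndarray:
    # Gray = R * 0.299 + G * 0.587 + B * 0.114, applied per pixel point.
    r, g, b = region_rgb[..., 0], region_rgb[..., 1], region_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def gray_difference_sum(mask_region_gray: np.ndarray, first_region_rgb: np.ndarray) -> float:
    # GreyD of formula (3): sum of absolute gray-value differences over corresponding
    # pixel points, after converting the colour first region to gray values.
    first_gray = rgb_to_gray(first_region_rgb.astype(np.float64))
    return float(np.abs(mask_region_gray.astype(np.float64) - first_gray).sum())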
In the embodiment of the present application, for the step S103, there may be some possible implementation manners as follows:
In some possible embodiments, for the manner of acquiring the mask regions shown in FIG. 5A, the image processing device may determine a first reference point in the first region and determine a second reference point corresponding to the first reference point in the second region. The image processing device calculates the lateral position of the first reference point on the right view and the lateral position of the second reference point on the left view, and thereby calculates the horizontal displacement difference between the two lateral positions. The first reference point may be the center of gravity of the first region and the second reference point the center of gravity of the second region; alternatively, the first reference point is the leftmost (or rightmost) pixel point in the first region and the second reference point is the leftmost (or rightmost) pixel point in the second region. As shown in FIG. 6A, FIG. 6A is a schematic diagram of a horizontal displacement difference. The image processing apparatus determines the center of gravity of the first region a1 as the first reference point RP1 according to the shape of the first region a1; since the first region a1 shown in FIG. 6A is circular, its center of gravity is the center of the circle. Similarly, the image processing apparatus may determine the second reference point RP2 in the second region b1. The image processing device calculates that the first reference point RP1 is P1 = 79 pixel points from the leftmost side of the right view and that the second reference point RP2 is P2 = 103 pixel points from the leftmost side of the left view, and obtains the absolute difference of 24 pixel points between P1 and P2. Finally, the absolute difference of 24 pixel points is converted into a horizontal displacement difference. Pixels may be converted into centimeters as follows: actual size (in inches) = number of pixels / resolution (in pixels per inch), and 1 inch = 2.54 cm.
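A Python sketch of this reference-point variant, using the centres of gravity of the two region masks as reference points and the stated pixel-to-centimetre conversion; binary masks and a pixels-per-inch resolution value are assumptions made for illustration:

import numpy as np

def horizontal_displacement_cm(first_region_mask: np.ndarray,
                               second_region_mask: np.ndarray,
                               resolution_ppi: float) -> float:
    # Reference points: centres of gravity (mean column index) of each region mask.
    x1 = np.argwhere(first_region_mask > 0)[:, 1].mean()   # lateral position in the right view
    x2 = np.argwhere(second_region_mask > 0)[:, 1].mean()  # lateral position in the left view
    displacement_px = abs(x2 - x1)                          # e.g. |103 - 79| = 24 in the text
    # actual size (inches) = pixels / resolution; 1 inch = 2.54 cm.
    return displacement_px / resolution_ppi * 2.54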
In some possible embodiments, for the manner of obtaining the mask regions shown in Fig. 5B, the image processing apparatus records the number of pixel points by which the mask window corresponding to each of the at least one mask region has been translated on the left view. The image processing apparatus may calculate the lateral position of the first region in the right view, and may obtain the recorded number of pixel points by which the mask window corresponding to the second region was translated on the left view, that is, the lateral position of the second region in the left view. The image processing apparatus may then calculate the horizontal displacement difference between the lateral position of the first region in the right view and the lateral position of the second region in the left view. As shown in Fig. 6B, which is a schematic diagram of another horizontal displacement difference, the number of pixel points R1 between the leftmost pixel point of the first region a1 and the left edge of the right view R is 100, that is, the lateral position of the first region a1 in the right view R is 100 pixel points. The mask window a2 corresponding to the second region b2 is translated rightward from the leftmost position of the left view L, and the number of pixel points R2 by which the window has been translated when b2 in a2 coincides with c1 is recorded as 135, that is, the lateral position of the second region b2 in the left view L is 135 pixel points. The image processing apparatus calculates the absolute difference between the lateral position of 100 pixel points of the first region in the right view and the lateral position of 135 pixel points of the second region in the left view, which is 35 pixel points. The image processing apparatus may obtain the resolution of the right view/left view and convert the absolute difference of 35 pixel points into a horizontal displacement difference according to that resolution.
Similarly, the number of pixel points R1 between the rightmost pixel point of the first region a1 and the right edge of the right view R is 120, that is, the lateral position of the first region a1 in the right view R is 120 pixel points. The mask window a2 corresponding to the second region b2 is translated leftward from the rightmost position of the left view L, and the number of pixel points R2 by which the window has been translated when b2 in a2 coincides with c1 is recorded as 85, that is, the lateral position of the second region b2 in the left view L is 85 pixel points. The image processing apparatus calculates the absolute difference between the lateral position of 120 pixel points of the first region in the right view and the lateral position of 85 pixel points of the second region in the left view, which is 35 pixel points. The image processing apparatus may obtain the resolution of the right view/left view and convert the absolute difference of 35 pixel points into a horizontal displacement difference according to that resolution.
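A similarly hedged sketch of the Fig. 6B calculation; the function name and the 300 pixels-per-inch resolution are assumptions, while the 100/135 and 120/85 positions come from the examples above.

```python
def region_displacement_cm(first_region_pos, recorded_shift, resolution_ppi):
    # Absolute difference between the lateral position of the first region in the
    # right view and the recorded translation of the second region's mask window
    # in the left view, converted from pixels to centimeters.
    pixel_diff = abs(first_region_pos - recorded_shift)
    return pixel_diff / resolution_ppi * 2.54

# Fig. 6B: |100 - 135| = 35 pixels; the right-edge variant gives the same
# |120 - 85| = 35 pixels.
d_cm = region_displacement_cm(100, 135, resolution_ppi=300)
```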
In some possible embodiments, for the manner of obtaining the mask region shown in Fig. 5C, the image processing apparatus records the number of pixel points by which the mask window A3 corresponding to each mask region b3 has been translated on the left view. The image processing apparatus may obtain the number of pixel points by which the mask window A3 corresponding to the second region was translated on the left view and convert that number of pixel points into a horizontal displacement difference. As shown in Fig. 6C, which is a schematic diagram of yet another horizontal displacement difference, the mask window A3 corresponding to the second region b3 is translated rightward starting from the position where the four vertices of A3 are aligned with the four vertices of the left view L, and the number of pixel points Pa by which the window has been translated when b3 in A3 coincides with c1 is recorded as 35. The image processing apparatus obtains Pa = 35 pixel points and converts it into a horizontal displacement difference.
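A rough sketch of this idea, under the simplifying assumptions that the views are rectified grayscale arrays (the two cameras lie on a horizontal line, so the search can stay on the rows occupied by the first region): because the search starts from the first region's own position, the shift with the smallest pixel difference is directly the number Pa of translated pixel points. All parameter names here are illustrative.

```python
import numpy as np

def find_pa(left_view, first_region, top_row, left_col, max_shift):
    # Slide a window of the first region's size rightwards from the first
    # region's own column position in the left view and return the shift Pa
    # with the smallest sum of absolute pixel differences.
    h, w = first_region.shape
    max_shift = min(max_shift, left_view.shape[1] - w - left_col)
    strip = left_view[top_row:top_row + h].astype(np.float64)
    ref = first_region.astype(np.float64)
    diffs = [np.abs(strip[:, left_col + s:left_col + s + w] - ref).sum()
             for s in range(max_shift + 1)]
    return int(np.argmin(diffs))  # e.g. Pa = 35 in the Fig. 6C example
```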
In the embodiment of the present application, for step S104, there may be the following possible implementations:
In some possible embodiments, the right camera and the left camera of the shooting device have the same shooting focal length. The image processing apparatus may acquire the shooting focal length used when shooting the right view and/or the left view. The image processing apparatus may further acquire the separation distance between the right camera (the first camera) and the left camera (the second camera). The image processing apparatus may calculate the product of the shooting focal length and the separation distance, and determine the quotient of that product divided by the calculated horizontal displacement difference as the distance between the target object and the shooting device. For example, let Z denote the distance between the target object and the shooting device, F the shooting focal length of the shooting device, B the separation distance between the right camera and the left camera, and d the horizontal displacement difference calculated above. The distance between the target object and the shooting device is then Z = B × F / d. In the embodiment of the present application, the horizontal displacement difference is calculated only once, without calculating the horizontal displacement difference of every pair of corresponding pixel points in the right view and the left view, and the distance between the target object and the shooting device is determined from this horizontal displacement difference, which reduces the amount of calculation and improves working efficiency.
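A one-line sketch of the Z = B × F / d relation; the function name is an assumption, and the three inputs are taken to be expressed in consistent units (for example, d converted to centimeters as above, with B and F also in centimeters).

```python
def distance_to_target(focal_length, separation_distance, displacement):
    # Distance between the target object and the shooting device: Z = B * F / d.
    return separation_distance * focal_length / displacement
```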
In the embodiment of the present application, the image processing apparatus acquires a first region in the right view (the first image) and then acquires a second region in the left view. The first region is the image region corresponding to the target object in the right view; the second region has the same shape and size as the first region, its pixel difference from the first region is smaller than the pixel difference threshold, and it is the image region corresponding to the target object in the left view. The image processing apparatus then calculates the horizontal displacement difference between the position of the first region in the right view and the position of the second region in the left view, and determines the distance between the target object and the shooting device from this horizontal displacement difference, where the horizontal displacement difference is the parallax between the first region and the second region. In the embodiment of the present application, the horizontal displacement difference between the position of the target object in the right view and its position in the left view is used to represent the horizontal displacement between the right view and the left view, so the horizontal displacement difference of every pair of corresponding pixel points in the two views does not need to be calculated, which reduces the amount of calculation and improves working efficiency.
The method of the embodiment of the present application is explained in detail above, and in order to better implement the above-mentioned scheme of the embodiment of the present application, the embodiment of the present application further provides a corresponding apparatus.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the image processing apparatus 70 may include:
A first obtaining module 701, configured to obtain a first region in a first image. The first area is an image area corresponding to a target object in the first image.
A second obtaining module 702, configured to acquire a second region in a second image. The second area is the same as the first area in shape and size, the pixel difference between the second area and the first area is smaller than a pixel difference threshold value, and the second area is an image area corresponding to the target object in the second image.
A calculating module 703, configured to calculate a horizontal displacement difference between the position of the first region in the first image acquired by the first acquiring module 701 and the position of the second region in the second image acquired by the second acquiring module 702. Wherein, the horizontal displacement difference is used for representing the parallax between the first area and the second area.
A determining module 704, configured to determine the distance between the target object and the shooting device according to the horizontal displacement difference calculated by the calculating module 703.
Wherein the first image is an image captured by a first camera of the image capturing apparatus, the second image is an image captured by a second camera of the image capturing apparatus, and the first camera and the second camera are located on a horizontal line.
In some possible embodiments, the first obtaining module 701 is specifically configured to perform image segmentation on the first image to obtain the first region.
In some possible embodiments, the first obtaining module 701 is specifically configured to determine, as the first region, a region jointly formed by a plurality of target pixel points in the first image. The characteristic value of each target pixel point is within the target threshold range.
In some possible embodiments, the first obtaining module 701 is further configured to determine a reference region corresponding to the target object in the first image, obtain feature values of a plurality of pixel points in the reference region, and determine a target threshold range for image segmentation according to the feature values of the plurality of pixel points in the reference region. Wherein, the reference area is used for reflecting the preliminary positioning of the target object in the first image.
In some possible embodiments, the characteristic value includes a color value, a gray value, or a depth reference value, and the depth reference value is used to indicate a reference distance between the pixel point and the photographing device.
In some possible embodiments, the second obtaining module 702 is specifically configured to obtain one or more mask regions in the second image, where each of the mask regions has a same shape and size as the first region, and determine, as the second region, a mask region having a pixel difference from the first region smaller than a pixel difference threshold according to a pixel difference between each of the mask regions and the first region.
In some possible embodiments, the second obtaining module 702 is specifically configured to determine a mask window corresponding to the first region according to the first region, level upper and lower edges of the mask window with upper and lower edges of the second image, and then translate the mask window to obtain one or more mask regions in the second image. Wherein, the size and shape of the mask window are the same as those of the first image.
In some possible embodiments, the pixel difference between each of the mask regions and the first region is a sum of color differences between each pixel point in each of the mask regions and a corresponding pixel point in the first region.
In some possible embodiments, the pixel difference between each of the mask regions and the first region is a sum of gray-scale differences between each pixel point in each of the mask regions and a corresponding pixel point in the first region.
In some possible embodiments, the image processing apparatus 70 further includes: a converting module 705, configured to convert the color value of each pixel point in the first region acquired by the first acquiring module 701 to obtain a gray value of each pixel point.
In some possible embodiments, the second obtaining module 702 is specifically configured to determine, as the second region, a mask region with the smallest pixel difference from the first region according to the pixel difference between each mask region and the first region.
In some possible embodiments, the determining module 704 is specifically configured to obtain a shooting focal length when the first image and/or the second image are shot, obtain a separation distance between the first camera and the second camera, and determine a distance between the target object and the shooting device according to the shooting focal length, the separation distance, and the horizontal displacement difference.
In some possible embodiments, the determining module 704 is further specifically configured to calculate a product of the shooting focal length and the separation distance, and determine a quotient obtained by dividing the product of the shooting focal length and the separation distance by the horizontal displacement difference as the distance between the target object and the shooting device.
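Purely as an illustration of how the modules of Fig. 7 could fit together, the following minimal sketch mirrors the module granularity described above; every signature, parameter, and internal detail is an assumption (grayscale, rectified views are assumed, and the matching simply uses a sum of absolute differences along the rows of the first region).

```python
import numpy as np

class ImageProcessingApparatus:
    def __init__(self, focal_length_cm, separation_cm, resolution_ppi):
        self.focal_length_cm = focal_length_cm
        self.separation_cm = separation_cm
        self.resolution_ppi = resolution_ppi

    def first_obtaining_module(self, first_image, lo, hi):
        # Keep pixels whose feature value falls inside the target threshold range.
        return (first_image >= lo) & (first_image <= hi)

    def second_obtaining_module(self, second_image, first_patch, top_row):
        # Slide a window of the first region's size along the matching rows and
        # return the column offset of the window with the smallest pixel difference.
        h, w = first_patch.shape
        strip = second_image[top_row:top_row + h].astype(np.float64)
        ref = first_patch.astype(np.float64)
        diffs = [np.abs(strip[:, s:s + w] - ref).sum()
                 for s in range(second_image.shape[1] - w + 1)]
        return int(np.argmin(diffs))

    def calculating_module(self, first_col, second_col):
        # Horizontal displacement in centimeters (pixels -> inches -> cm).
        return abs(first_col - second_col) / self.resolution_ppi * 2.54

    def determining_module(self, displacement_cm):
        # Distance between the target object and the shooting device: Z = B * F / d.
        return self.separation_cm * self.focal_length_cm / displacement_cm
```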
In a specific implementation, the implementation of each module may further correspond to the corresponding description of the method embodiment shown in Fig. 3, and perform the methods and functions performed in the foregoing embodiments.
Referring to fig. 8, fig. 8 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application. As shown in fig. 8, the image processing apparatus 100 may include: a processor 110 and a memory 120 (one or more computer-readable storage media). These components may communicate over one or more communication buses 130.
The processor 110 may include an Application Processor (AP) and an Image Signal Processor (ISP). The AP and the ISP may be two relatively independent components or may be integrated on one integrated chip.
The memory 120 is coupled to the processor 110 for storing various software programs and/or sets of instructions. In particular implementations, memory 120 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 120 may store an operating system (hereinafter referred to simply as a system), such as WINDOWS, LINUX, ANDROID, IOS, etc. The memory 120 may also store a network communication program that may be used to communicate with one or more additional devices, one or more terminal devices, one or more network devices. The memory 120 may further store a user interface program, which may vividly display the content of the application program through a graphical operation interface, and receive a control operation of the application program from a user through input controls such as application icons, menus, dialog boxes, and buttons. The memory 120 may also store one or more application programs. As shown in fig. 8, these applications may include: cameras, galleries, and other applications, among others. In the present application, the memory 120 may be used to store a computer program implementing the image processing method shown in fig. 3. The processor 110 calls the computer program stored in the memory 120 to implement the image processing method shown in fig. 3.
In some possible implementations, the image processing apparatus 100 may further include a communication component 140, a power management component 150, and a peripheral system (I/O) 160. The communication component 140 may control the communication connection between the image processing apparatus 100 and other communication devices. The communication component 140 may include a radio frequency component, a cellular component, and the like, and may provide a wireless communication function by using radio frequency. Alternatively, the communication component 140 may include a network interface, a modulator/demodulator (modem), and the like, for connecting the image processing apparatus 100 to a network (for example, the Internet, a local area network, a wide area network, a telecommunication network, a cellular network, a satellite network, or a plain old telephone service).
The power management component 150 is mainly used to provide stable and high-precision voltages to the processor 110, the memory 120, the communication component 140, and the peripheral system 160.
The peripheral system (I/O) 160 is mainly used to implement interaction between the image processing apparatus 100 and the user/external environment, and mainly includes the input/output devices of the image processing apparatus 100. In a specific implementation, the peripheral system (I/O) 160 may include a plurality of (here, two or more) camera controllers, such as the camera controller 1, the camera controller 2, and the camera controller 3 shown in fig. 8. Each camera controller may be coupled to its corresponding peripheral device, such as the camera 1, the camera 2, or the camera 3. In some possible embodiments, when an image is captured, the camera 1 and the camera 2 are located on a horizontal line; the camera 1 may be a color camera, and the camera 2 may be a black-and-white camera. In practice, the peripheral system (I/O) 160 may also include other I/O peripherals, which is not limited here.
In some possible embodiments, when an image is captured, the camera 1 and the camera 2 are located on a horizontal line. The camera controller 1 controls the camera 1 to transmit the collected image signal to the ISP, and the ISP processes the received image signal to form the first image. Similarly, the camera controller 2 controls the camera 2 to transmit the collected image signal to the ISP, and the ISP processes the received image signal to form the second image. The ISP transmits the first image and the second image to the AP for the image processing described in the embodiment of fig. 3. In a specific implementation, if the camera 1 and the camera 2 are arranged left and right on the image processing device, the device may be held in portrait (vertical screen) orientation during shooting so that the camera 1 and the camera 2 lie on a horizontal line; if the camera 1 and the camera 2 are arranged up and down on the image processing device, the device may be held in landscape (horizontal screen) orientation during shooting so that they lie on a horizontal line. The physical positional relationship between the camera 1 and the camera 2 on the image processing device is not limited in the embodiment of the present application. The camera controller 1 and the camera controller 2 control the camera 1 and the camera 2 to acquire image signals simultaneously through a synchronization mechanism. In some possible embodiments, the AP or the ISP sends control instructions to the camera controller 1 and the camera controller 2 at the same time, and the control instructions are used to control the camera 1 and the camera 2 to acquire image signals at the same time.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Claims (24)

1. An image processing method, comprising:
Acquiring a first area in a first image, wherein the first area is an image area corresponding to a target object in the first image;
acquiring a second area in a second image, wherein the shape and the size of the second area are the same as those of the first area, the pixel difference between the second area and the first area is smaller than a pixel difference threshold value, and the second area is an image area corresponding to the target object in the second image;
calculating a horizontal displacement difference between the position of the first area in the first image and the position of the second area in the second image, and determining the distance between the target object and the shooting equipment according to the horizontal displacement difference, wherein the horizontal displacement difference is the parallax between the first area and the second area;
The first image is an image shot by a first camera of the shooting equipment, the second image is an image shot by a second camera of the shooting equipment, and the first camera and the second camera are located on a horizontal line.
2. The method of claim 1, wherein acquiring the first region in the first image comprises:
determining a region formed by a plurality of target pixel points in the first image as the first region, wherein the characteristic value of the target pixel point is within a target threshold range.
3. The method according to claim 2, wherein before determining a region collectively composed of a plurality of target pixels in the first image as the first region, further comprising:
Determining a reference region corresponding to the target object in the first image;
obtaining characteristic values of a plurality of pixel points in the reference region;
And determining the target threshold range for image segmentation according to the characteristic values of a plurality of pixel points in the reference region.
4. The method according to claim 2 or 3, wherein the characteristic value comprises a color value, a gray value or a depth reference value, and the depth reference value is a reference distance between a pixel point and the photographing device.
5. The method of any of claims 1-4, wherein acquiring the second region in the second image comprises:
acquiring one or more mask regions in the second image, each of the one or more mask regions being the same in shape and size as the first region;
And determining the mask area with the pixel difference with the first area smaller than a pixel difference threshold value as the second area according to the pixel difference of each mask area with the first area.
6. The method of claim 5, wherein said acquiring one or more mask regions in the second image comprises:
Determining a mask window corresponding to the first area according to the first area;
respectively leveling the upper edge and the lower edge of the mask window with the upper edge and the lower edge of the second image and then translating in the horizontal direction to obtain the one or more mask regions in the second image;
Wherein the mask window is the same size and shape as the first image.
7. The method according to claim 5 or 6, wherein the pixel difference between each mask region and the first region is a sum of color differences between each pixel point in each mask region and a corresponding pixel point in the first region.
8. The method according to claim 5 or 6, wherein the pixel difference between each mask region and the first region is a sum of gray-level differences between the gray value of each pixel point in each mask region and the gray value of a corresponding pixel point in the first region.
9. The method according to claim 8, wherein before determining a mask region having a pixel difference from the first region less than a pixel difference threshold as a second region according to a pixel difference of each mask region from the first region, further comprising:
And converting the color value of each pixel point in the first area to obtain the gray value of each pixel point.
10. The method according to any one of claims 1-9, wherein said determining a distance between the target object and a photographing apparatus according to the horizontal displacement difference comprises:
acquiring a shooting focal length when the first image and/or the second image is shot;
Acquiring a spacing distance between the first camera and the second camera;
And determining the distance between the target object and the shooting equipment according to the shooting focal length, the spacing distance and the horizontal displacement difference.
11. The method of claim 10, wherein the determining the distance between the target object and the photographing apparatus according to the photographing focal length, the separation distance, and the horizontal displacement difference comprises:
Calculating the product of the shooting focal length and the spacing distance;
And determining the quotient of the product of the shooting focal length and the separation distance and the horizontal displacement difference as the distance between the target object and the shooting device.
12. An image processing apparatus characterized by comprising:
The first acquisition module is used for acquiring a first area in a first image, wherein the first area is an image area corresponding to a target object in the first image;
A second obtaining module, configured to obtain a second region in a second image, where the second region has a same shape and size as the first region, and a pixel difference between the second region and the first region is smaller than a pixel difference threshold, and the second region is an image region corresponding to the target object in the second image;
A calculating module, configured to calculate a horizontal displacement difference between the position of the first region in the first image acquired by the first acquiring module and the position of the second region in the second image acquired by the second acquiring module, where the horizontal displacement difference is used to represent a parallax between the first region and the second region;
The determining module is used for determining the distance between the target object and the shooting device according to the horizontal displacement difference calculated by the calculating module;
The first image is an image shot by a first camera of the shooting equipment, the second image is an image shot by a second camera of the shooting equipment, and the first camera and the second camera are located on a horizontal line.
13. The image processing apparatus according to claim 12, wherein the first obtaining module is specifically configured to determine, as the first region, a region that is jointly composed of a plurality of target pixels in the first image, where feature values of the target pixels are within a target threshold range.
14. The image processing apparatus of claim 13, wherein the first obtaining module is further configured to:
determining a reference region corresponding to the target object in the first image;
Obtaining characteristic values of a plurality of pixel points in the reference region;
And determining a target threshold range for image segmentation according to the characteristic values of a plurality of pixel points in the reference region.
15. The apparatus according to claim 13 or 14, wherein the feature value includes a color value, a grayscale value, or a depth reference value, and the depth reference value is used to represent a reference distance between a pixel point and the photographing device.
16. The image processing apparatus according to any one of claims 12 to 15, wherein the second obtaining module is specifically configured to:
acquiring one or more mask regions in the second image, each of the one or more mask regions being the same in shape and size as the first region;
And determining the mask area with the pixel difference with the first area smaller than a pixel difference threshold value as the second area according to the pixel difference of each mask area with the first area.
17. The image processing apparatus according to claim 16, wherein the second obtaining module is specifically configured to:
Determining a mask window corresponding to the first area according to the first area;
Respectively leveling the upper edge and the lower edge of the mask window with the upper edge and the lower edge of the second image and then translating to obtain the one or more mask regions in the second image;
wherein the mask window is the same size and shape as the first image.
18. The image processing device according to claim 16 or 17, wherein the pixel difference between each mask region and the first region is a sum of color differences between each pixel point in each mask region and a corresponding pixel point in the first region.
19. The image processing device according to claim 16 or 17, wherein the pixel difference between each mask region and the first region is a sum of gray-level differences between the gray value of each pixel point in each mask region and the gray value of a corresponding pixel point in the first region.
20. The image processing apparatus according to claim 19, characterized by further comprising:
And the conversion module is used for converting the color value of each pixel point in the first area acquired by the first acquisition module to obtain the gray value of each pixel point.
21. The image processing apparatus according to any of claims 12 to 20, wherein the determining module is specifically configured to:
Acquiring a shooting focal length when the first image and/or the second image is shot;
acquiring a spacing distance between the first camera and the second camera;
And determining the distance between the target object and the shooting equipment according to the shooting focal length, the spacing distance and the horizontal displacement difference.
22. The image processing apparatus according to claim 21, wherein the determining module is specifically configured to:
Calculating the product of the shooting focal length and the spacing distance;
And determining the quotient of the product of the shooting focal length and the separation distance and the horizontal displacement difference as the distance between the target object and the shooting device.
23. An image processing apparatus, comprising a processor and a memory, wherein the memory is configured to store program code, the processor is configured to invoke the program code, and when the program code is executed, the processor is configured to perform the method according to any one of claims 1-11.
24. A computer storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method according to any one of claims 1-11.
CN201810562100.1A 2018-05-31 2018-05-31 Image processing method and device Active CN110555874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810562100.1A CN110555874B (en) 2018-05-31 2018-05-31 Image processing method and device

Publications (2)

Publication Number Publication Date
CN110555874A true CN110555874A (en) 2019-12-10
CN110555874B CN110555874B (en) 2023-03-10

Family

ID=68735473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810562100.1A Active CN110555874B (en) 2018-05-31 2018-05-31 Image processing method and device

Country Status (1)

Country Link
CN (1) CN110555874B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999939A (en) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system, real-time three-dimensional reconstruction method and three-dimensional interactive equipment
US20140294289A1 (en) * 2013-03-29 2014-10-02 Sony Computer Entertainment Inc. Image processing apparatus and image processing method
CN105491277A (en) * 2014-09-15 2016-04-13 联想(北京)有限公司 Image processing method and electronic equipment
CN105825494A (en) * 2015-08-31 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal
CN106291521A (en) * 2016-07-29 2017-01-04 广东欧珀移动通信有限公司 Distance-finding method, device and the mobile terminal moved based on MEMS

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021146978A1 (en) * 2020-01-22 2021-07-29 华为技术有限公司 Display system, graphics processing unit (gpu), display controller, and display method
CN111314686A (en) * 2020-03-20 2020-06-19 深圳市博盛医疗科技有限公司 Method, system and medium for automatically optimizing 3D (three-dimensional) stereoscopic impression
CN111314686B (en) * 2020-03-20 2021-06-25 深圳市博盛医疗科技有限公司 Method, system and medium for automatically optimizing 3D (three-dimensional) stereoscopic impression

Also Published As

Publication number Publication date
CN110555874B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
US9886774B2 (en) Photogrammetric methods and devices related thereto
EP3251090B1 (en) Occlusion handling for computer vision
EP2992508B1 (en) Diminished and mediated reality effects from reconstruction
US7554575B2 (en) Fast imaging system calibration
EP3189495B1 (en) Method and apparatus for efficient depth image transformation
JP2017520050A (en) Local adaptive histogram flattening
KR101903619B1 (en) Structured stereo
WO2014200625A1 (en) Systems and methods for feature-based tracking
US10825249B2 (en) Method and device for blurring a virtual object in a video
CN111080687A (en) Method and apparatus for active depth sensing and calibration method thereof
WO2020119467A1 (en) High-precision dense depth image generation method and device
CN110276774B (en) Object drawing method, device, terminal and computer-readable storage medium
US11042984B2 (en) Systems and methods for providing image depth information
CN109247068A (en) Method and apparatus for rolling shutter compensation
US20120162387A1 (en) Imaging parameter acquisition apparatus, imaging parameter acquisition method and storage medium
CN113052919A (en) Calibration method and device of visual sensor, electronic equipment and storage medium
CN108604374A (en) A kind of image detecting method and terminal
CN111882655B (en) Method, device, system, computer equipment and storage medium for three-dimensional reconstruction
CN110555874B (en) Image processing method and device
US10154241B2 (en) Depth map based perspective correction in digital photos
CN113112415A (en) Target automatic identification method and device for image measurement of total station
CN111354037A (en) Positioning method and system
CN113610702A (en) Picture construction method and device, electronic equipment and storage medium
KR20220058846A (en) Robot positioning method and apparatus, apparatus, storage medium
CN110349196B (en) Depth fusion method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant