CN110326028A - Image processing method, apparatus, computer system and mobile device - Google Patents

Image processing method, apparatus, computer system and mobile device

Info

Publication number
CN110326028A
CN110326028A (application CN201880012612.9A)
Authority
CN
China
Prior art keywords
image
block
frequency component
unit
depth information
Prior art date
Legal status
Pending
Application number
CN201880012612.9A
Other languages
Chinese (zh)
Inventor
周游
朱振宇
杜劼熹
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN110326028A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/536 Depth or shape recovery from perspective effects, e.g. by using vanishing points
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

A method, apparatus, computer system and mobile device for image processing are disclosed. The method comprises: determining occlusion areas in a first image and a second image, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time; replacing pixel points in the occlusion areas in the first image and the second image with random points; and acquiring the depth information of the target scene according to the first image and the second image after the random points are replaced. The technical solution of the embodiments of the present invention can improve the accuracy of obtaining depth information.

Description

Image processing method, device, computer system and mobile device
Copyright declaration
The disclosure of this patent document contains material which is subject to copyright protection. The copyright is owned by the copyright owner. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the official records of the patent and trademark office.
Technical Field
The present invention relates to the field of information technology, and more particularly, to a method, an apparatus, a computer system, and a mobile device for image processing.
Background
Humans are entering the information age, and computers are becoming increasingly widespread in almost all fields. As an important field of intelligent computing, computer vision has been greatly developed and applied. Computer vision relies on an imaging system, most commonly a camera, to replace the visual organ as the input sensing means. A basic visual system can be formed by two cameras and is called a binocular vision system.
A binocular camera system captures two pictures at different angles at the same time with its two cameras. Using the differences between the two pictures, the relative position and angle of the two cameras, and triangulation, the distance between the scene and the cameras can be calculated and drawn on a picture, namely a depth map. That is, the binocular camera system obtains the depth information of the scene from the difference between two pictures taken at different angles at the same time. When this difference is due only to the different shooting angles, the calculated scene depth is correct. However, the two cameras may also image differently for other reasons, for example when a camera is blocked or the weather is bad, and in that case the calculated depth information is wrong.
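As a side note (this sketch is not part of the original disclosure), for a rectified binocular pair the triangulation described above reduces to depth = focal length × baseline / disparity. The following Python sketch applies that relation to a disparity map; the focal length, baseline and disparity values are made-up examples.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a disparity map (in pixels) from a rectified stereo pair into depth (in metres).

    For rectified cameras, triangulation reduces to depth = f * B / d, where f is the
    focal length in pixels, B the baseline between the two cameras, and d the disparity.
    """
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0                      # zero disparity means no match / infinite depth
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example with assumed values: 400 px focal length, 10 cm baseline.
disparity = np.array([[8.0, 16.0], [0.0, 32.0]])
print(disparity_to_depth(disparity, focal_length_px=400.0, baseline_m=0.10))
```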
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, a computer system and a mobile device, which can improve the accuracy of acquiring depth information.
In a first aspect, a method for image processing is provided, including: determining occlusion areas in a first image and a second image, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time; replacing pixel points in the occlusion areas in the first image and the second image with random points; and acquiring the depth information of the target scene according to the first image and the second image after the random points are replaced.
In a second aspect, a method of image processing is provided, comprising: segmenting a first image and a second image into blocks, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time; detecting the matching degree of the blocks corresponding to the first image and the second image; when the cost corresponding to the matching degree of a first block in the first image and a corresponding second block in the second image is greater than a cost threshold value, removing high-frequency component details in the first block and the second block; and acquiring the depth information of the target scene according to the first block and the second block after the high-frequency component details are removed.
In a third aspect, there is provided an apparatus for image processing, comprising: a determining unit, configured to determine occlusion areas in a first image and a second image, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time; a processing unit, configured to replace pixel points in the occlusion areas in the first image and the second image with random points; and an obtaining unit, configured to obtain the depth information of the target scene according to the first image and the second image after the random points are replaced.
In a fourth aspect, there is provided an apparatus for image processing, comprising: a segmentation unit, configured to segment a first image and a second image into blocks, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time; a detection unit, configured to detect the matching degree of the corresponding blocks of the first image and the second image; a processing unit, configured to remove high-frequency component details in a first block of the first image and a corresponding second block of the second image when the cost corresponding to their matching degree is greater than a cost threshold; and an obtaining unit, configured to obtain the depth information of the target scene according to the first block and the second block after the high-frequency component details are removed.
In a fifth aspect, there is provided a computer system comprising: a memory for storing computer executable instructions; a processor for accessing the memory and executing the computer-executable instructions to perform operations in the method of the first or second aspect.
In a sixth aspect, there is provided a mobile device comprising: an image processing apparatus according to the third or fourth aspect; alternatively, the computer system of the fifth aspect described above.
In a seventh aspect, a computer storage medium is provided, in which program code is stored, and the program code can be used to instruct the execution of the method of the first or second aspect.
According to the technical solution of the embodiments of the present invention, the depth information is calculated after the pixel points in the occlusion area are replaced with random points, so the obtained depth information is more accurate than depth information obtained directly from the occluded pixel points, and the accuracy of obtaining depth information can be improved.
Drawings
Fig. 1 is an architecture diagram of a solution to which an embodiment of the invention is applied.
Fig. 2 is a schematic architecture diagram of a mobile device of an embodiment of the present invention.
FIG. 3 is a schematic flow diagram of a method of image processing of one embodiment of the present invention.
Fig. 4 is a schematic flow chart of a method of image processing of another embodiment of the present invention.
Fig. 5 is a flowchart illustrating the removal of high-frequency component details according to an embodiment of the present invention.
FIG. 6 is a schematic block diagram of an apparatus for image processing according to an embodiment of the present invention.
Fig. 7 is a schematic block diagram of an apparatus for image processing of another embodiment of the present invention.
FIG. 8 is a schematic block diagram of a computer system of an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be described below with reference to the accompanying drawings.
It should be understood that the specific examples herein are intended only to help those skilled in the art better understand the embodiments of the present invention, and are not intended to limit the scope of the embodiments of the present invention.
It should also be understood that the formula in the embodiment of the present invention is only an example, and is not intended to limit the scope of the embodiment of the present invention, and the formula may be modified, and the modifications should also fall within the protection scope of the present invention.
It should also be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should also be understood that the various embodiments described in this specification may be implemented alone or in combination, and are not limited to the embodiments of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 is an architecture diagram of a solution to which an embodiment of the invention is applied.
As shown in fig. 1, the system 100 may receive an image 102 to be processed, and process the image 102 to be processed to obtain a processing result 108. For example, the system 100 may receive two images captured by a binocular camera system and process the two images to obtain depth information. In some embodiments, the components in system 100 may be implemented by one or more processors, which may be processors in a computing device or in a mobile device (e.g., a drone). The processor may be any kind of processor, which is not limited in this embodiment of the present invention. One or more memories may also be included in the system 100. The memory may be used to store instructions and data, such as computer-executable instructions to implement aspects of embodiments of the invention, the image to be processed 102, the processing results 108, and so forth. The memory may be any kind of memory, which is not limited in this embodiment of the present invention.
The technical solution of the embodiments of the present invention can be applied to electronic equipment with dual or multiple cameras, such as mobile devices, VR/AR glasses, or dual-camera mobile phones. The mobile device may be an unmanned aerial vehicle, an unmanned ship, an autonomous vehicle, a robot, an aircraft, or the like, but the embodiment of the present invention is not limited thereto.
FIG. 2 is a schematic architectural diagram of a mobile device 200 of one embodiment of the present invention.
As shown in FIG. 2, the mobile device 200 may include a power system 210, a control system 220, a sensing system 230, and a processing system 240.
The power system 210 is used to power the mobile device 200.
Taking an unmanned aerial vehicle as an example, its power system may include an electronic speed controller (ESC), a propeller, and a motor corresponding to the propeller. The motor is connected between the ESC and the propeller, and the motor and the propeller are arranged on the corresponding arm; the ESC receives a driving signal generated by the control system and provides a driving current to the motor according to the driving signal so as to control the rotation speed of the motor. The motor drives the propeller to rotate, thereby providing power for the flight of the unmanned aerial vehicle.
The sensing system 230 may be used to measure attitude information of the mobile device 200, i.e., position information and state information of the mobile device 200 in space, such as a three-dimensional position, a three-dimensional angle, a three-dimensional velocity, a three-dimensional acceleration, a three-dimensional angular velocity, and the like. The sensing System 230 may include at least one of a gyroscope, an electronic compass, an Inertial Measurement Unit (IMU), a vision sensor, a Global Positioning System (GPS), a barometer, an airspeed meter, and the like.
In an embodiment of the present invention, the sensing system 230 may also be used for capturing images, i.e. the sensing system 230 comprises a sensor, such as a camera or the like, for capturing images.
The control system 220 is used to control the movement of the mobile device 200. The control system 220 may control the mobile device 200 according to preset program instructions. For example, the control system 220 may control the movement of the mobile device 200 based on the attitude information of the mobile device 200 measured by the sensing system 230. The control system 220 may also control the mobile device 200 based on control signals from a remote control. For example, for a drone, the control system 220 may be a flight controller, or a control circuit in the flight controller.
The processing system 240 may process the images acquired by the sensing system 230. For example, the Processing system 240 may be an Image Signal Processing (ISP) chip.
The processing system 240 may be the system 100 of fig. 1, or the processing system 240 may include the system 100 of fig. 1.
It should be understood that the above-described division and naming of the various components of the mobile device 200 is merely exemplary and should not be construed as a limitation of the embodiments of the present invention.
It should also be understood that the mobile device 200 may also include other components not shown in FIG. 2, as embodiments of the invention are not limited in this respect.
FIG. 3 shows a schematic flow chart of a method 300 of image processing according to one embodiment of the invention. The method 300 may be applied to scenarios in which the shooting lens (camera) may be occluded. The method 300 may be performed by the system 100 shown in FIG. 1, or by the mobile device 200 shown in FIG. 2; in particular, when performed by the mobile device 200, it may be performed by the processing system 240 of FIG. 2.
310: Determining occlusion regions in a first image and a second image, where the first image and the second image are images obtained by shooting a target scene at different angles at the same time.
In the case where the shooting lens may be occluded, the images (the first image and the second image) obtained by shooting the target scene at different angles at the same time may contain occluded areas. In the embodiment of the present invention, before the first image and the second image are used to acquire the depth information, the occlusion regions in the first image and the second image are determined.
Optionally, the image may be partitioned first, that is, a high-resolution large image is partitioned into low-resolution small images for processing, so that the computing resources may be further saved, parallel processing is facilitated, and the limitation of hardware is reduced.
Therefore, for a first image and a second image, the first image and the second image may be first sliced into blocks; then detecting the matching degree of the blocks corresponding to the first image and the second image; and then determining the occlusion areas in the first image and the second image according to the matching degree of the blocks corresponding to the first image and the second image.
For example, for each patch (block) of the first image and the second image, a semi-global matching (SGM) algorithm may be used to detect the matching degree; if the matching degree is low, that is, the cost is high, it may be determined that the patch is in an occlusion region.
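The following Python sketch (an illustration, not the patented implementation) shows the block-based occlusion check in its simplest form: the images are cut into fixed-size blocks, a mean-absolute-difference over a small disparity range stands in for the per-block SGM cost, and blocks whose best cost remains above a threshold are flagged as possibly occluded; the block size, disparity range and threshold are assumed values.

```python
import numpy as np

def block_min_cost(left_block, right_block, max_disp=16):
    """Minimum matching cost of a block pair over a range of horizontal shifts.

    Mean absolute difference is used here as a simple stand-in for the per-block
    SGM cost described in the text.
    """
    h, w = left_block.shape
    costs = []
    for d in range(min(max_disp, w)):
        diff = np.abs(left_block[:, d:].astype(np.float64) - right_block[:, :w - d].astype(np.float64))
        costs.append(diff.mean())
    return min(costs)

def occluded_blocks(left, right, block=32, cost_threshold=30.0):
    """Boolean grid marking blocks whose best matching cost stays high (possible occlusion)."""
    h, w = left.shape
    grid = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            cost = block_min_cost(left[ys:ys + block, xs:xs + block],
                                  right[ys:ys + block, xs:xs + block])
            grid[by, bx] = cost > cost_threshold   # poor match even at the best shift
    return grid

# Synthetic example: the right view is the left view shifted by 4 pixels.
left = np.random.randint(0, 256, (96, 96)).astype(np.uint8)
right = np.roll(left, -4, axis=1)
print(occluded_blocks(left, right))
```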
Optionally, the occlusion regions in the first image and the second image may also be determined according to the sources of the first image and the second image.
In particular, the origin of the image may determine that certain areas of the image are occluded areas. For example, for a drone, the camera may be occluded by a propeller guard, a propeller, the landing gear, or the gimbal, and thus certain areas of the image captured by the camera may be occluded areas. Thus, depending on the source of the image, e.g., whether the image comes from a forward/backward/downward vision sensor, the occlusion regions in the image can be determined.
Optionally, the occlusion region may also be pre-calibrated. That is, for the vision sensors (cameras) at different positions, the occlusion areas of the images they capture can be calibrated in advance, so that the occlusion areas in the first image and the second image can be determined according to this pre-calibration. For example, a fixed area of the binocular images can be marked as occluded and left unchanged. For example, for the area blocked after a propeller guard is installed, once the propeller guard is detected to be installed, that area can be directly determined to be an occlusion area, and the pixel points in that area are set to random points.
320: Replacing the pixel points in the occlusion areas in the first image and the second image with random points.
For the occlusion region in the image, in the embodiment of the present invention, a replacement process is performed, that is, a pixel point in the occlusion region in the image is replaced with a random point.
Optionally, Gaussian-distributed random points or evenly distributed random points may be used, but the embodiment of the present invention is not limited thereto.
Alternatively, when the first image and the second image are divided into blocks, pixel points in the blocks in the occlusion region in the first image and the second image may be replaced with random points.
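A minimal Python sketch of this replacement step is given below, assuming an 8-bit single-channel image and a boolean occlusion mask; both the Gaussian and the evenly distributed variants mentioned above are shown, and the distribution parameters are assumed rather than taken from the patent.

```python
import numpy as np

def replace_with_random_points(image, occlusion_mask, mode="uniform", rng=None):
    """Replace the pixels inside the occlusion mask with random values.

    mode="uniform"  draws evenly distributed values over the 8-bit range;
    mode="gaussian" draws from a normal distribution (here centred at 128 with a
    standard deviation of 40, an assumed parameterisation) and clips to [0, 255].
    """
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    n = int(occlusion_mask.sum())
    if mode == "uniform":
        values = rng.integers(0, 256, size=n)
    else:
        values = np.clip(rng.normal(128.0, 40.0, size=n), 0, 255)
    out[occlusion_mask] = values.astype(image.dtype)
    return out

# Example: the top-left quarter of a blank image is marked as occluded.
img = np.zeros((4, 4), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
print(replace_with_random_points(img, mask, mode="gaussian"))
```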
330: Acquiring the depth information of the target scene according to the first image and the second image after the random points are replaced.
In the embodiment of the present invention, after replacing the pixel points in the occlusion region in the first image and the second image with random points, the depth information is calculated.
Since the random points cannot be matched, the matching cost of the point correspondences in the occlusion region is naturally large; therefore, when the depth information is calculated, the depth information of a point in the occlusion region is calculated using the surrounding points.
In other words, since the occlusion conditions of the two cameras are not consistent, directly calculating the depth information from the captured images may produce a large error. After the points in the occlusion region are replaced with random points, the depth information of the points in the occlusion region is calculated from the points around the occlusion region; owing to the continuity of features in the image, the depth information obtained from the surrounding points is more accurate than the depth information obtained directly from the occluded points.
Alternatively, the depth information may be acquired using an SGM algorithm.
In the SGM algorithm, for a pixel p with disparity d, the cost along a direction r is:

L_r(p, d) = C(p, d) + min( L_r(p-r, d), L_r(p-r, d-1) + P1, L_r(p-r, d+1) + P1, min_i L_r(p-r, i) + P2 ) - min_k L_r(p-r, k)
where r represents a direction and p-r denotes the previous pixel along that direction; for example, there may be 8 directions: left-to-right, right-to-left, top-to-bottom, bottom-to-top, and the four diagonal directions.
L_r(p, d) represents the minimum cost value along the current direction r when the disparity of the current pixel p takes the value d.
Taking the left-to-right direction r as an example, the second term in L_r(p, d) is the minimum value chosen from the following 4 candidate values:
1. the minimum cost value when the disparity of the previous pixel (left adjacent pixel) is d;
2. the minimum cost value when the disparity of the previous pixel (left adjacent pixel) is d-1, plus the penalty coefficient P1;
3. the minimum cost value when the disparity of the previous pixel (left adjacent pixel) is d+1, plus the penalty coefficient P1;
4. the minimum cost value when the previous pixel (left adjacent pixel) takes any other disparity value, plus the penalty coefficient P2.
In addition, the minimum cost of the previous pixel over all its disparity values needs to be subtracted from L_r(p, d). This is because L_r(p, d) would otherwise keep increasing as the current pixel moves along the path; subtracting this minimum keeps the value small and prevents overflow.
C(p, d) = min( d(p, p-d, I_L, I_R), d(p-d, p, I_R, I_L) )
That is, half-pixel interpolation is performed between the current pixel p and the pixel q obtained by shifting p by the disparity d, and the minimum grey-level or RGB difference between the two pixels is taken as the value of C(p, d).
Specifically: let the grey-level/RGB value of the pixel p be I(p) and that of the matched pixel q = p-d be I(q). First, from the three values I(p), (I(p)+I(p-1))/2 and (I(p)+I(p+1))/2, select the one with the smallest difference from I(q); this difference is d(p, p-d). Then, from the three values I(q), (I(q)+I(q-1))/2 and (I(q)+I(q+1))/2, select the one with the smallest difference from I(p); this difference is d(p-d, p). Finally, the smaller of the two is taken as C(p, d).
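The two formulas above can be restated in code. The following Python sketch (an illustration under assumed penalty values P1 and P2, not the patented implementation) computes the Birchfield-Tomasi style cost C(p, d) with half-pixel interpolation and aggregates it along the left-to-right path as L_r(p, d) on a single scanline.

```python
import numpy as np

def bt_cost(left_row, right_row, p, d):
    """Birchfield-Tomasi style matching cost C(p, d) for one scanline."""
    q = p - d
    if q < 0:
        return 255.0                                   # disparity runs off the image
    def half_pixel(row, i):
        vals = [float(row[i])]
        if i - 1 >= 0:
            vals.append((row[i] + row[i - 1]) / 2.0)
        if i + 1 < len(row):
            vals.append((row[i] + row[i + 1]) / 2.0)
        return vals
    # d(p, p-d): values interpolated around p in the left row, compared against I(q).
    d1 = min(abs(v - right_row[q]) for v in half_pixel(left_row, p))
    # d(p-d, p): values interpolated around q in the right row, compared against I(p).
    d2 = min(abs(v - left_row[p]) for v in half_pixel(right_row, q))
    return min(d1, d2)

def aggregate_left_to_right(left_row, right_row, max_disp=4, P1=10.0, P2=120.0):
    """L_r(p, d) along the left-to-right direction, following the recursion above."""
    n = len(left_row)
    L = np.zeros((n, max_disp))
    for p in range(n):
        C = np.array([bt_cost(left_row, right_row, p, d) for d in range(max_disp)])
        if p == 0:
            L[p] = C
            continue
        prev = L[p - 1]
        prev_min = prev.min()
        for d in range(max_disp):
            candidates = [prev[d],
                          prev[d - 1] + P1 if d - 1 >= 0 else np.inf,
                          prev[d + 1] + P1 if d + 1 < max_disp else np.inf,
                          prev_min + P2]
            # Subtracting prev_min keeps L_r from growing along the path (overflow guard).
            L[p, d] = C[d] + min(candidates) - prev_min
    return L

# Toy scanline pair in which the bright region is shifted by one pixel.
left = [10, 10, 10, 200, 200, 200, 10, 10]
right = [10, 10, 200, 200, 200, 10, 10, 10]
print(aggregate_left_to_right(left, right).argmin(axis=1))  # best disparity per pixel on this path
```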
After the points in the occlusion region are replaced with random points, the net effect on the SGM result is that the depth information of the occlusion region is estimated from the preceding points along the path.
Optionally, in an embodiment of the present invention, after removing the pixels replaced with the random points, the fitness parameter of the matching process between the first image and the second image may be calculated.
For example, with the SGM algorithm, the costs of each point p in the full map are summed over all directions r, S(p, d) = Σ_r L_r(p, d), and the sum of S(p, d) over all pixels at their selected disparities d, Σ_p S(p, d), is taken as the fitness parameter of the whole matching process. Since C(p, d) of a pixel set to a random point is a large number, it would introduce a large deviation into the fitness parameter; therefore, when the fitness parameter of the whole matching process is calculated, these pixels are removed, that is, they do not participate in the calculation.
Optionally, if the fitness parameter is smaller than a fitness preset value, the depth information is used.
That is, if the result obtained from the above formula is good, i.e., Σ_p S(p, d) is smaller than the fitness preset value, the repair is considered successful, and the repaired depth map can be used for other applications, such as obstacle-avoidance navigation.
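A small Python sketch of this fitness check (an illustration with assumed inputs and threshold): it assumes S(p, d) has already been summed over the directions into a per-pixel cost map at the selected disparities, and it excludes the pixels that were replaced with random points via a boolean mask.

```python
import numpy as np

def matching_fitness(per_pixel_cost, random_point_mask):
    """Sum of S(p, d) over all pixels that were NOT replaced with random points."""
    return float(per_pixel_cost[~random_point_mask].sum())

def repair_succeeded(per_pixel_cost, random_point_mask, fitness_threshold):
    """Use the repaired depth map only when the fitness stays below the preset value."""
    return matching_fitness(per_pixel_cost, random_point_mask) < fitness_threshold

# Example: the high-cost pixel is a random point, so it is excluded from the sum.
cost = np.array([[1.0, 2.0], [50.0, 3.0]])
mask = np.array([[False, False], [True, False]])
print(repair_succeeded(cost, mask, fitness_threshold=10.0))   # True: 1 + 2 + 3 = 6 < 10
```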
According to the technical solution of the embodiment of the present invention, the depth information is calculated after the pixel points in the occlusion region are replaced with random points, so the obtained depth information is more accurate than depth information obtained directly from the occluded pixel points, and the accuracy of obtaining depth information can be improved.
The above describes the processing of images with occluded areas in a binocular camera system. Besides occlusion, the difference between the two images obtained by a binocular camera system may also be caused by bad weather and the like; an image processing method for such cases is described below.
FIG. 4 shows a schematic flow chart of a method 400 of image processing according to another embodiment of the invention. The method 400 may be applied to situations where high-frequency component details are present in the images; for example, the high-frequency component details may include at least one of raindrops, snowflakes, fog drops, or noise. The method 400 may be performed by the system 100 shown in FIG. 1, or by the mobile device 200 shown in FIG. 2; in particular, when performed by the mobile device 200, it may be performed by the processing system 240 of FIG. 2.
410: Segmenting a first image and a second image into blocks, where the first image and the second image are images obtained by shooting a target scene at different angles at the same time.
In the embodiment of the invention, when the images (the first image and the second image) obtained by shooting the target scene at different angles at the same time are processed, the images are firstly partitioned, namely, a high-resolution large image is partitioned into low-resolution small images for processing, so that the computing resources can be further saved, the parallel processing is facilitated, and the limitation of hardware is reduced.
420: Detecting the matching degree of the corresponding blocks of the first image and the second image.
After the image is divided into blocks, the matching degree is detected for each block. For example, for each block of the first image and the second image, the matching degree may be detected using the SGM algorithm.
430: When the cost corresponding to the matching degree of a first block in the first image and a corresponding second block in the second image is greater than a cost threshold, removing the high-frequency component details in the first block and the second block.
Corresponding processing is performed according to the detected block matching degree. If the cost corresponding to the matching degree is greater than the cost threshold, that is, the matching degree is poor, the block may contain high-frequency component details; in this case, the high-frequency component details in the first block and the second block are removed.
Optionally, the high-frequency component details may be removed from the image to be processed in the following manner, where the image to be processed represents the first block or the second block:
carrying out low-pass filtering processing on the image to be processed to obtain a reference image;
subtracting the reference image from the image to be processed to obtain a detail image;
obtaining a negative residual image according to the detail image and a depth detail neural network, wherein the depth detail neural network is obtained by training a sample including the high-frequency component details and a sample not including the high-frequency component details;
and adding the image to be processed and the negative residual image to obtain an image with the details of the high-frequency components removed.
Specifically, in the embodiment of the present invention, a deep detail neural network (deep detail network) may be used to remove the high-frequency component details. The deep detail neural network may be trained with samples that include the high-frequency component details and samples that do not include the high-frequency component details.
For example, the deep detail neural network may be trained with an objective function of the following form:

min over W, b of (1/N) Σ_i || f(X_i,detail, W, b) - (Y_i - X_i) ||^2

where N is the number of training samples, the function f(·) is the residual network (ResNet), W and b are the parameters to be trained and learned, X_i is an image containing high-frequency component details, Y_i is the corresponding image without high-frequency component details, and (Y_i - X_i) is therefore the high-frequency component detail in the image; f(X_i,detail, W, b) is trained to approach (Y_i - X_i). X_i,detail is the detail image of X_i and can be obtained by the following decomposition process:
X_detail = X - X_base
The original image X may be filtered with a low-pass filter to obtain a reference image X_base with the high-frequency components removed; the low-pass-filtered reference image X_base is then subtracted from the original image X to obtain the detail image X_detail. Since raindrops, fog drops, snowflakes and the like are essentially high-frequency components in the image (rain, snow and fog are not seen as just one or two drops in a picture; they generally appear repeatedly in many places across the whole image), a rough detail image X_detail containing the high-frequency component details can be obtained preliminarily by this method.
For a sample X_i, X_i,detail can be obtained by the above decomposition process, and the deep detail neural network is then obtained by training with the objective function.
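For illustration, a training step consistent with the objective above could look as follows in PyTorch (an assumed framework; the tiny convolutional network, optimiser settings and synthetic data below are stand-ins, not the ResNet or the training set described in the patent).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDetailNet(nn.Module):
    """Small convolutional stand-in for the residual network f(X_detail; W, b)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x_detail):
        return self.body(x_detail)            # predicts the negative residual (Y - X)

def training_step(net, optimiser, x, x_detail, y):
    """One step of minimising (1/N) * sum_i || f(X_i,detail) - (Y_i - X_i) ||^2."""
    optimiser.zero_grad()
    loss = torch.mean((net(x_detail) - (y - x)) ** 2)
    loss.backward()
    optimiser.step()
    return loss.item()

# Synthetic batch: x contains high-frequency "detail", y is its cleaner counterpart.
net = TinyDetailNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.rand(4, 1, 64, 64)
y = torch.clamp(x - 0.1 * torch.rand_like(x), 0.0, 1.0)
x_detail = x - F.avg_pool2d(x, 5, stride=1, padding=2)   # X_detail = X - X_base (box low-pass)
print(training_step(net, opt, x, x_detail, y))
```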
For an image to be processed that contains high-frequency component details, its X_detail is first obtained through the decomposition process and then input into the deep detail neural network, which outputs a negative residual image; the image to be processed and the negative residual image are added to obtain an image with the high-frequency component details removed.
FIG. 5 is a flowchart illustrating the removal of high-frequency component details according to an embodiment of the present invention. The decomposition process 502 in FIG. 5 may be the decomposition process described above, and the deep detail neural network 504 may be a deep detail neural network trained in the manner described above.
As shown in FIG. 5, the image 501 to be processed is decomposed (502) into a detail image 503; the detail image 503 is used as the input of the deep detail neural network 504, which outputs a negative residual image 505; the image 501 to be processed and the negative residual image 505 are added to obtain the final output, i.e., the image 506 with the high-frequency component details removed.
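The pipeline of FIG. 5 can be sketched in a few lines of Python (an illustration only; the Gaussian blur used as the low-pass filter and the dummy network below are assumptions, not the specific filter or network of the patent).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_high_frequency_details(image, detail_network, sigma=2.0):
    """Decompose the image, predict the negative residual, and add it back (FIG. 5).

    image:          2-D float array, the block to be processed (501)
    detail_network: callable mapping a detail image to a negative residual image (504)
    sigma:          strength of the low-pass filter (assumed value)
    """
    base = gaussian_filter(image, sigma=sigma)      # X_base: low-pass reference image
    detail = image - base                           # X_detail = X - X_base (503)
    negative_residual = detail_network(detail)      # negative residual image (505)
    return image + negative_residual                # output with high-frequency details removed (506)

# Dummy network standing in for the trained deep detail neural network.
print(remove_high_frequency_details(np.random.rand(32, 32), lambda d: -0.5 * d).shape)
```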
440: Obtaining the depth information of the target scene according to the first block and the second block after the high-frequency component details are removed.
In the embodiment of the present invention, the depth information is calculated after the high-frequency component details of the first block and the second block are removed. High-frequency component details can cause differences between the two blocks and thus affect the accuracy of the depth information; after the high-frequency component details are removed, more accurate depth information can be obtained.
Optionally, when a cost corresponding to a matching degree of a third block in the first image and a corresponding fourth block in the second image is not greater than the cost threshold, obtaining depth information of the target scene according to the third block and the fourth block.
That is, if the cost corresponding to the matching degree is not greater than the cost threshold, i.e., the matching degree is relatively high, the block either contains no high-frequency component details or is not significantly affected by them; in this case, the depth information may be calculated directly from the original blocks.
Optionally, in an embodiment of the present invention, an adaptation parameter of a matching process of the first image and the second image after removing the details of the high-frequency component may also be calculated; and if the adaptation degree parameter is smaller than the adaptation degree preset value, using the depth information.
Specifically, after removing the details of the high frequency component, whether the image is successfully repaired may be determined according to the fitness parameter. If the fitness parameter is smaller than the fitness preset value, the repair is considered to be successful, and the repaired depth map can be used for other applications, such as obstacle avoidance navigation.
According to the technical scheme of the embodiment of the invention, the depth information is calculated after the high-frequency component details are removed, so that the obtained depth information is more accurate than the depth information obtained without removing the high-frequency component details, and the accuracy of obtaining the depth information can be improved.
Having described the method of image processing of the embodiments of the present invention in detail above, an apparatus, a computer system, and a removable device of image processing of the embodiments of the present invention will be described below.
Fig. 6 shows a schematic block diagram of an apparatus 600 for image processing according to an embodiment of the present invention. The apparatus 600 may perform the method 300 of image processing of the embodiment of the present invention described above.
As shown in fig. 6, the apparatus 600 may include:
a determining unit 610, configured to determine an occlusion region in a first image and a second image, where the first image and the second image are images obtained by shooting a target scene at different angles at the same time;
a processing unit 620, configured to replace pixel points in the occlusion region in the first image and the second image with random points;
an obtaining unit 630, configured to obtain depth information of the target scene according to the first image and the second image after replacing the random point.
Optionally, the apparatus 600 further comprises:
a segmentation unit 640 configured to segment the first image and the second image into blocks;
the processing unit 620 is specifically configured to:
and replacing pixel points in the first image and the second image in the blocks in the occlusion region with random points.
Optionally, the determining unit 610 is specifically configured to:
detecting the matching degree of the blocks corresponding to the first image and the second image;
and determining an occlusion area in the first image and the second image according to the matching degree of the blocks corresponding to the first image and the second image.
Optionally, the determining unit 610 is specifically configured to:
determining occlusion regions in the first image and the second image according to the sources of the first image and the second image.
Optionally, the determining unit 610 is specifically configured to:
and determining occlusion areas in the first image and the second image according to the pre-calibration.
Optionally, the random points are gaussian distributed random points or evenly distributed random points.
Optionally, the apparatus 600 further comprises:
a calculating unit 650, configured to calculate an adaptation parameter of a matching process between the first image and the second image after removing the pixels replaced with the random points.
Optionally, the apparatus 600 further comprises:
an applying unit 660, configured to use the depth information when the adaptation degree parameter is smaller than an adaptation degree predetermined value.
Fig. 7 shows a schematic block diagram of an apparatus 700 for image processing according to an embodiment of the present invention. The apparatus 700 may perform the method 400 of image processing of the embodiment of the present invention described above.
As shown in fig. 7, the apparatus 700 may include:
a segmentation unit 710, configured to segment a first image and a second image into segments, where the first image and the second image are images obtained by shooting a target scene at different angles at the same time;
a detecting unit 720, configured to detect matching degrees of the blocks corresponding to the first image and the second image;
a processing unit 730, configured to remove the high-frequency component details in a first block of the first image and a corresponding second block of the second image when the cost corresponding to their matching degree is greater than a cost threshold;
an obtaining unit 740, configured to obtain the depth information of the target scene according to the first block and the second block after the high-frequency component details are removed.
Optionally, the processing unit 730 is specifically configured to:
removing the high-frequency component details in an image to be processed in the following manner, wherein the image to be processed represents the first block or the second block:
carrying out low-pass filtering processing on the image to be processed to obtain a reference image;
subtracting the reference image from the image to be processed to obtain a detail image;
obtaining a negative residual image according to the detail image and a depth detail neural network, wherein the depth detail neural network is obtained by training a sample including the high-frequency component details and a sample not including the high-frequency component details;
and adding the image to be processed and the negative residual image to obtain an image with the details of the high-frequency components removed.
Optionally, the high frequency component detail includes at least one of raindrops, snowflakes, fog drops, or noise.
Optionally, the obtaining unit 740 is further configured to:
and when the cost corresponding to the matching degree of a third block in the first image and a corresponding fourth block in the second image is not larger than the cost threshold, acquiring the depth information of the target scene according to the third block and the fourth block.
Optionally, the apparatus 700 further comprises:
a calculating unit 750, configured to calculate an adaptation parameter of the matching process of the first image and the second image after the details of the high-frequency component are removed;
an applying unit 760 for using the depth information when the adaptation degree parameter is smaller than an adaptation degree predetermined value.
It should be understood that the apparatus for image processing in the above embodiments of the present invention may be a chip, which may be implemented by circuitry; the chip may be employed in a processor, but the embodiments of the present invention are not limited to a specific implementation form.
FIG. 8 shows a schematic block diagram of a computer system 800 of an embodiment of the invention.
As shown in fig. 8, the computer system 800 may include a processor 810 and a memory 820.
It should be understood that the computer system 800 may also include other components commonly included in computer systems, such as input/output devices, communication interfaces, etc., which are not limited by the embodiments of the present invention.
The memory 820 is used to store computer executable instructions.
The memory 820 may be any of various types of memory; for example, it may include a random access memory (RAM) and may further include a non-volatile memory, such as at least one disk memory, which is not limited in this embodiment of the present invention.
The processor 810 is configured to access the memory 820 and execute the computer-executable instructions to perform the operations in the methods of image processing of various embodiments of the present invention described above.
The processor 810 may include a microprocessor, a Field-Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and the like, which are not limited in the embodiments of the present invention.
The embodiment of the invention also provides a mobile device, which can comprise the image processing device or the computer system of the various embodiments of the invention.
The image processing apparatus, the computer system, and the mobile device according to the embodiments of the present invention may correspond to an execution main body of the image processing method according to the embodiments of the present invention, and the above and other operations and/or functions of each module in the image processing apparatus, the computer system, and the mobile device are respectively for implementing corresponding flows of the foregoing methods, and are not described herein again for brevity.
Embodiments of the present invention also provide a computer storage medium having a program code stored therein, where the program code may be used to instruct a method for performing image processing according to the above-described embodiments of the present invention.
It should be understood that, in the embodiment of the present invention, the term "and/or" is only one kind of association relation describing an associated object, and means that three kinds of relations may exist. For example, a and/or B, may represent: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (28)

  1. A method of image processing, comprising:
    determining occlusion areas in a first image and a second image, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time;
    replacing pixel points in the occlusion areas in the first image and the second image with random points;
    and acquiring the depth information of the target scene according to the first image and the second image after the random points are replaced.
  2. The method of claim 1, further comprising:
    segmenting the first image and the second image into blocks;
    the replacing of the pixel points in the occlusion region in the first image and the second image with random points includes:
    and replacing pixel points in the blocks in the occlusion areas in the first image and the second image with random points.
  3. The method of claim 2, wherein determining occlusion regions in the first and second images comprises:
    detecting the matching degree of the blocks corresponding to the first image and the second image;
    and determining an occlusion area in the first image and the second image according to the matching degree of the blocks corresponding to the first image and the second image.
  4. The method of claim 1 or 2, wherein determining occlusion regions in the first and second images comprises:
    determining occlusion regions in the first image and the second image according to the sources of the first image and the second image.
  5. The method of claim 1 or 2, wherein determining occlusion regions in the first and second images comprises:
    and determining occlusion areas in the first image and the second image according to the pre-calibration.
  6. The method according to any one of claims 1 to 5, wherein the random points are Gaussian distributed random points or evenly distributed random points.
  7. The method according to any one of claims 1 to 6, further comprising:
    and after removing the pixels replaced by the random points, calculating the adaptation degree parameter of the matching process of the first image and the second image.
  8. The method of claim 7, further comprising:
    and if the adaptation degree parameter is smaller than the adaptation degree preset value, using the depth information.
  9. A method of image processing, comprising:
    segmenting a first image and a second image into blocks, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time;
    detecting the matching degree of the blocks corresponding to the first image and the second image;
    when the cost corresponding to the matching degree of a first block in the first image and a corresponding second block in the second image is greater than a cost threshold value, removing high-frequency component details in the first block and the second block;
    and acquiring the depth information of the target scene according to the first block and the second block after the high-frequency component details are removed.
  10. The method of claim 9, wherein the removing high frequency component details in the first partition and the second partition comprises:
    removing the high-frequency component details in an image to be processed in the following manner, wherein the image to be processed represents the first block or the second block:
    carrying out low-pass filtering processing on the image to be processed to obtain a reference image;
    subtracting the reference image from the image to be processed to obtain a detail image;
    obtaining a negative residual image according to the detail image and a depth detail neural network, wherein the depth detail neural network is obtained by training a sample including the high-frequency component details and a sample not including the high-frequency component details;
    and adding the image to be processed and the negative residual image to obtain an image with the details of the high-frequency components removed.
  11. The method of claim 9 or 10, wherein the high frequency component details comprise at least one of raindrops, snowflakes, fog drops, or noise.
  12. The method according to any one of claims 9 to 11, further comprising:
    and when the cost corresponding to the matching degree of a third block in the first image and a corresponding fourth block in the second image is not larger than the cost threshold, acquiring the depth information of the target scene according to the third block and the fourth block.
  13. The method according to any one of claims 9 to 12, further comprising:
    calculating an adaptation parameter of the matching process of the first image and the second image after the high-frequency component details are removed;
    and if the adaptation degree parameter is smaller than the adaptation degree preset value, using the depth information.
  14. An apparatus for image processing, comprising:
    the device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for determining an occlusion area in a first image and a second image, and the first image and the second image are images obtained by shooting a target scene at different angles at the same time;
    a processing unit, configured to replace pixel points in the first image and the second image in the occlusion region with random points;
    and the obtaining unit is used for obtaining the depth information of the target scene according to the first image and the second image after the random points are replaced.
  15. The apparatus of claim 14, further comprising:
    a segmentation unit configured to segment the first image and the second image into blocks;
    the processing unit is specifically configured to:
    and replacing pixel points in the blocks in the occlusion areas in the first image and the second image with random points.
  16. The apparatus according to claim 15, wherein the determining unit is specifically configured to:
    detecting the matching degree of the blocks corresponding to the first image and the second image;
    and determining an occlusion area in the first image and the second image according to the matching degree of the blocks corresponding to the first image and the second image.
  17. The apparatus according to claim 14 or 15, wherein the determining unit is specifically configured to:
    determining occlusion regions in the first image and the second image according to the sources of the first image and the second image.
  18. The apparatus according to claim 14 or 15, wherein the determining unit is specifically configured to:
    and determining occlusion areas in the first image and the second image according to the pre-calibration.
  19. The apparatus according to any one of claims 14 to 18, wherein the random points are gaussian distributed random points or evenly distributed random points.
  20. The apparatus of any one of claims 14 to 19, further comprising:
    and a calculating unit, configured to calculate an adaptation parameter of a matching process between the first image and the second image after removing the pixels replaced with the random points.
  21. The apparatus of claim 20, further comprising:
    an application unit, configured to use the depth information when the fitness parameter is smaller than a fitness predetermined value.
  22. An apparatus for image processing, comprising:
    the segmentation unit is used for segmenting a first image and a second image into segments, wherein the first image and the second image are images obtained by shooting a target scene at different angles at the same time;
    the detection unit is used for detecting the matching degree of the blocks corresponding to the first image and the second image;
    the processing unit is used for removing high-frequency component details in a first block and a second block in the second image when the cost corresponding to the matching degree of the first block in the first image and the second block in the second image is larger than a cost threshold;
    and the acquisition unit is used for acquiring the depth information of the target scene according to the first block and the second block after the high-frequency component details are removed.
  23. The apparatus according to claim 22, wherein the processing unit is specifically configured to:
    removing the high-frequency component details in an image to be processed in the following manner, wherein the image to be processed represents the first block or the second block:
    carrying out low-pass filtering processing on the image to be processed to obtain a reference image;
    subtracting the reference image from the image to be processed to obtain a detail image;
    obtaining a negative residual image according to the detail image and a depth detail neural network, wherein the depth detail neural network is obtained by training a sample including the high-frequency component details and a sample not including the high-frequency component details;
    and adding the image to be processed and the negative residual image to obtain an image with the details of the high-frequency components removed.
  24. The apparatus of claim 22 or 23, wherein the high frequency component details comprise at least one of raindrops, snowflakes, fog drops, or noise.
  25. The apparatus according to any one of claims 22 to 24, wherein the acquisition unit is further configured to:
    when the cost corresponding to the matching degree of a third block in the first image and a corresponding fourth block in the second image is not larger than the cost threshold, acquiring the depth information of the target scene according to the third block and the fourth block.
  26. The apparatus of any one of claims 22 to 25, further comprising:
    a calculating unit, configured to calculate a fitness parameter of the matching process between the first image and the second image after the high-frequency component details are removed;
    an application unit, configured to use the depth information when the fitness parameter is smaller than a predetermined fitness value.
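Claims 20, 21 and 26 accept the computed depth only when a fitness (adaptation) parameter of the matching process stays below a predetermined value. A minimal sketch, assuming the fitness parameter is simply the mean absolute difference over the kept pixels (an optional mask can exclude pixels that were replaced with random points or had details removed) and an illustrative acceptance threshold:

```python
import numpy as np

def fitness_parameter(img_a, img_b, valid_mask=None):
    """Mean absolute difference between the two processed views over the kept pixels."""
    diff = np.abs(img_a.astype(np.float32) - img_b.astype(np.float32))
    if valid_mask is not None:  # e.g. exclude pixels replaced with random points
        diff = diff[valid_mask]
    return float(diff.mean())

def accept_depth(depth, img_a, img_b, valid_mask=None, fitness_limit=12.0):
    """Return the depth map if the match is good enough, otherwise None."""
    if fitness_parameter(img_a, img_b, valid_mask) < fitness_limit:
        return depth
    return None
```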
  27. A computer system, comprising:
    a memory for storing computer-executable instructions;
    a processor for accessing the memory and executing the computer-executable instructions to perform operations in the method of any one of claims 1 to 13.
  28. A mobile device, comprising:
    the apparatus of any one of claims 14 to 26; or
    the computer system of claim 27.
CN201880012612.9A 2018-02-08 2018-02-08 Image processing method and apparatus, computer system and mobile device Pending CN110326028A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075847 WO2019153196A1 (en) 2018-02-08 2018-02-08 Image processing method and apparatus, computer system and mobile device

Publications (1)

Publication Number Publication Date
CN110326028A (en) 2019-10-11

Family

ID=67548672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880012612.9A Pending CN110326028A (en) Image processing method and apparatus, computer system and mobile device

Country Status (2)

Country Link
CN (1) CN110326028A (en)
WO (1) WO2019153196A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7697749B2 (en) * 2004-08-09 2010-04-13 Fuji Jukogyo Kabushiki Kaisha Stereo image processing device
CN101321299B (en) * 2007-06-04 2011-06-01 华为技术有限公司 Parallax generation method, generation cell and three-dimensional video generation method and device
JP6040838B2 (en) * 2012-08-29 2016-12-07 株式会社Jvcケンウッド Depth estimation apparatus, depth estimation method, depth estimation program, image processing apparatus, image processing method, and image processing program
CN103295229B (en) * 2013-05-13 2016-01-20 清华大学深圳研究生院 The overall solid matching method of video depth Information recovering
CN103796004B (en) * 2014-02-13 2015-09-30 西安交通大学 A kind of binocular depth cognitive method of initiating structure light
CN107360354B (en) * 2017-07-31 2020-06-26 Oppo广东移动通信有限公司 Photographing method, photographing device, mobile terminal and computer-readable storage medium

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6865289B1 (en) * 2000-02-07 2005-03-08 Canon Kabushiki Kaisha Detection and removal of image occlusion errors
US20050286756A1 (en) * 2004-06-25 2005-12-29 Stmicroelectronics, Inc. Segment based image matching method and system
US20100026712A1 (en) * 2008-07-31 2010-02-04 Stmicroelectronics S.R.L. Method and system for video rendering, computer program product therefor
US20100091092A1 (en) * 2008-10-10 2010-04-15 Samsung Electronics Co., Ltd. Image processing apparatus and method
JP2011060216A (en) * 2009-09-14 2011-03-24 Fujifilm Corp Device and method of processing image
US20110115886A1 (en) * 2009-11-18 2011-05-19 The Board Of Trustees Of The University Of Illinois System for executing 3d propagation for depth image-based rendering
KR20120026662A (en) * 2010-09-10 2012-03-20 삼성전자주식회사 Apparatus and method for inpainting in occlusion
JP2012109788A (en) * 2010-11-17 2012-06-07 Konica Minolta Holdings Inc Image processing device and parallax information generation device
US20130315473A1 (en) * 2011-02-24 2013-11-28 Sony Corporation Image processing device and image processing method
CN102878982A (en) * 2011-07-11 2013-01-16 北京新岸线移动多媒体技术有限公司 Method for acquiring three-dimensional scene information and system thereof
US20150092845A1 (en) * 2012-05-08 2015-04-02 S.I.Sv.El Societa' Italiana Per Lo Sviluppo Dell' Elettronica S.P.A. Method for generating and reconstructing a three-dimensional video stream, based on the use of the occlusion map, and corresponding generating and reconstructing device
JP2014072809A (en) * 2012-09-28 2014-04-21 Dainippon Printing Co Ltd Image generation apparatus, image generation method, and program for the image generation apparatus
CN105191287A (en) * 2013-03-08 2015-12-23 吉恩-鲁克·埃法蒂卡迪 Method of replacing objects in a video stream and computer program
US20150172633A1 (en) * 2013-12-13 2015-06-18 Panasonic Intellectual Property Management Co., Ltd. Image capturing apparatus, monitoring system, image processing apparatus, image capturing method, and non-transitory computer readable recording medium
US20160140700A1 (en) * 2014-11-18 2016-05-19 Sung Hee Park Method and apparatus for filling images captured by array cameras
CN107534764A (en) * 2015-04-30 2018-01-02 深圳市大疆创新科技有限公司 Strengthen the system and method for image resolution ratio
CN107077741A (en) * 2016-11-11 2017-08-18 深圳市大疆创新科技有限公司 Depth drawing generating method and the unmanned plane based on this method
CN106960454A (en) * 2017-03-02 2017-07-18 武汉星巡智能科技有限公司 Depth of field barrier-avoiding method, equipment and unmanned vehicle
CN107016698A (en) * 2017-03-20 2017-08-04 深圳格兰泰克汽车电子有限公司 Based on tapered plane smooth binocular solid matching process and device
CN107194963A (en) * 2017-04-28 2017-09-22 努比亚技术有限公司 A kind of dual camera image processing method and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIONG WEI et al.: "Anti-occlusion light field depth estimation algorithm with adaptive cost volume", Journal of Image and Graphics (《中国图象图形学报》) *

Also Published As

Publication number Publication date
WO2019153196A1 (en) 2019-08-15

Similar Documents

Publication Publication Date Title
CN108961327B (en) Monocular depth estimation method and device, equipment and storage medium thereof
CN111656136B (en) Vehicle positioning system using lidar
CN108955718B (en) Visual odometer and positioning method thereof, robot and storage medium
EP3517997B1 (en) Method and system for detecting obstacles by autonomous vehicles in real-time
US10789719B2 (en) Method and apparatus for detection of false alarm obstacle
US10636151B2 (en) Method for estimating the speed of movement of a camera
US10762643B2 (en) Method for evaluating image data of a vehicle camera
EP3236424B1 (en) Information processing apparatus and method of controlling the same
US11082633B2 (en) Method of estimating the speed of displacement of a camera
US7764284B2 (en) Method and system for detecting and evaluating 3D changes from images and a 3D reference model
CN111433818A (en) Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle
CN109961417B (en) Image processing method, image processing apparatus, and mobile apparatus control method
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
CN113223064B (en) Visual inertial odometer scale estimation method and device
Schramm et al. Data fusion for 3D thermal imaging using depth and stereo camera for robust self-localization
CN116503566B (en) Three-dimensional modeling method and device, electronic equipment and storage medium
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium
CN115511970B (en) Visual positioning method for autonomous parking
CN115718304A (en) Target object detection method, target object detection device, vehicle and storage medium
CN111260538A (en) Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
CN110326028A (en) Image processing method and apparatus, computer system and mobile device
CN111986248A (en) Multi-view visual perception method and device and automatic driving automobile
CN114612875A (en) Target detection method, target detection device, storage medium and electronic equipment
CN113409268B (en) Method and device for detecting passable area based on monocular camera and storage medium
CN116612459B (en) Target detection method, target detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191011