US11379963B2 - Information processing method and device, cloud-based processing device, and computer program product - Google Patents
- Publication number
- US11379963B2 (application US16/609,447; US201816609447A)
- Authority
- US
- United States
- Prior art keywords
- depth image
- depression region
- row
- suspected
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G06T5/002—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
Definitions
- the present disclosure relates to the field of data processing technologies, and specifically to an information processing method and device, a cloud-based processing device, and a computer program product.
- Computer vision is a branch of science that studies how to make a machine "see". More specifically, it is machine vision that uses a device in place of human eyes to recognize, track, and measure a target. Image processing is then performed, using a processor to turn the acquired data into images that are more suitable for human eyes to observe, or to be transmitted to an instrument for detection.
- machine vision can be applied in many scenarios.
- machine vision can be applied to a guiding stick, which is used to avoid obstacles in front of a visually impaired person.
- machine vision can be applied in the field of navigation, which is used to detect a road and obstacles on the surface of the road.
- Embodiments of the present disclosure provide an information processing method, device, cloud-based processing device, and computer program product, which relate to the field of data processing technologies and can improve the efficiency of detecting whether a road area contains a depression region.
- embodiments of the present application provide an information processing method, which includes: acquiring a depth image; processing the depth image to obtain a means-by-row graph, and determining a road area in the depth image based on the means-by-row graph; determining a suspected depression region in the road area; and evaluating the suspected depression region against a depression threshold to determine whether the depth image contains a depression region.
- embodiments of the present application further provide an information processing device, which includes:
- an acquisition unit configured to acquire a depth image
- a processing unit configured to process the depth image to obtain a means-by-row graph, and then to determine a road area in the depth image based on the means-by-row graph;
- a determination unit configured to determine a suspected depression region in the road area
- a judgment unit configured to evaluate the suspected depression region against a depression threshold to determine whether the depth image contains a depression region.
- embodiments of the present application further provide a cloud processing device, which includes a processor and a memory.
- the memory is configured to store instructions which, when executed by the processor, cause the device to execute the method according to any one of embodiments provided in the first aspect of the present disclosure.
- embodiments of the present application further provide a computer program product.
- a depth image is acquired and processed. Firstly, a road area in the depth image can be determined according to the row means of the depth image; then a suspected depression region in the road area can be determined; and finally, the suspected depression region can be evaluated against a depression threshold to determine whether the depth image contains a depression region.
- the technical solutions provided by the embodiments of this application can effectively judge whether there is a depression region on a road surface, with high detection efficiency and fast calculation speed. They can solve the problem of low accuracy in detecting depressions or objects below the horizontal line that is associated with existing technologies.
- FIG. 1 is a flow chart of an information processing method provided by some embodiments of the disclosure.
- FIG. 2 is a diagram of a first scene using the information processing method provided by some embodiments of the disclosure.
- FIG. 3 is a schematic diagram of a world coordinate system provided by some embodiments of this disclosure.
- FIG. 4 is a diagram of a second scene using the information processing method provided by some embodiments of the disclosure.
- FIG. 5 illustrates a flow chart of an information processing method provided by another embodiment of the disclosure.
- FIG. 6 illustrates a flow chart of an information processing method provided by yet another embodiment of the disclosure.
- FIG. 7 is a schematic diagram illustrating a structure of an information processing device according to some embodiments of the disclosure.
- FIG. 8 is a schematic diagram illustrating a structure of an information processing device according to another embodiment of the disclosure.
- FIG. 9 is a schematic diagram illustrating a structure of an information processing device according to yet another embodiment of the disclosure.
- FIG. 10 is a schematic diagram of the cloud-based processing device provided by some embodiments of this application.
- the phrase “if . . . ” as used in the disclosure can be interpreted as “in situation where . . . ”, “when . . . ”, or “in response to the determination that . . . ”, or “upon detecting . . . ”.
- the phrase “if determining . . . ” or “if detecting . . . (condition or event under statement)” can be interpreted as “when determining . . . ” or “in response to the determination that . . . ” or “when detecting . . . (condition or event under statement)” or “in response to detection that . . . (condition or event of statement)”.
- machine vision can be applied in many scenarios. For example, it can be applied to a guiding stick or to the field of navigation; for road surfaces, it is most commonly used for road surface detection or obstacle detection.
- such detection methods include a seed-point region-growing method, a random-point least-squares method, a mean block height method, a V-disparity algorithm, etc.
- there are issues such as complex calculations and vulnerability to the influence of samples and actual environments, such that the accuracy of the results is affected, the recognition efficiency is low, and the detection range is limited.
- embodiments of the present disclosure provide an information processing method, which utilizes the depth images that are obtained or acquired to detect whether there are depressions on the road surface.
- FIG. 1 illustrates a flow chart of an information processing method provided by some embodiments of the disclosure. As shown in FIG. 1 , the information processing method includes the following steps:
- the depth image can be obtained or acquired by means of a depth sensor that photographs an object in real time, as shown in FIG. 2, which illustrates a schematic diagram of a first scene using the information processing method provided by some embodiments of the disclosure.
- the depth image may also have already been taken and then be acquired.
- a user can upload a depth image to a processing device.
- a specified depth image can be acquired in a depth image library.
- a depth sensor, i.e., a depth camera.
- depth sensors generally fall into three types: three-dimensional sensors based on structured light, such as Kinect, RealSense, LeapMotion, Orbbec, etc.; three-dimensional sensors based on binocular stereo vision, such as ZED, Inuitive, Human+Director, etc.; and depth sensors based on the time-of-flight (TOF) principle, such as PMD, Panasonic, etc.
- the depth image can be acquired for subsequent detection to determine whether the current image contains a depression region.
- the depression region can exist on a road surface; in practical applications, however, it is not limited to the road surface and can also appear in other scenarios, such as indoors.
- FIG. 3 is a schematic diagram of a world coordinate system provided by some embodiments of this disclosure.
- an optical center of the depth sensor is used as an origin of the world coordinate system
- a horizontally rightward direction is chosen as a positive direction of an X axis
- a vertically downward direction is chosen as a positive direction of a Y axis
- a forward direction that is perpendicular to the plane is chosen as a positive direction of a Z axis, such that a world coordinate system is established.
- a point P(X_c, Y_c, Z_c) in the depth sensor coordinate system can be converted to a point P(X_w, Y_w, Z_w) in the world coordinate system.
- the calculation formulas are as follows:
- u and v are the coordinate values of the point P in the pixel coordinate system;
- X_c, Y_c and Z_c are the coordinate values of the point P in the camera coordinate system;
- X_w is the X-axis coordinate value of each pixel of the image in the world coordinate system;
- Y_w is the Y-axis coordinate value of each pixel of the image in the world coordinate system;
- Z_w is the Z-axis coordinate value of each pixel of the image in the world coordinate system;
- α, β and γ describe the attitude angles of the depth sensor, respectively representing the rotation angles of the depth sensor around the X, Y and Z axes of the world coordinate system;
- X_c is the X-axis coordinate value of each pixel of the image in the depth sensor coordinate system;
- Y_c is the Y-axis coordinate value of each pixel of the image in the depth sensor coordinate system;
- Z_c is the Z-axis coordinate value of each pixel of the image in the depth sensor coordinate system;
- M_3×4 is the camera's intrinsic parameter matrix.
- the image comprising the Z_w values is the depth image in the world coordinate system.
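As a rough illustration, the pixel-to-world conversion described above can be sketched in Python as follows. The function names, the pinhole intrinsics (fx, fy, cx, cy), and the composition of the attitude into three axis rotations applied in Z·Y·X order are assumptions of this sketch, not the patent's calibration procedure:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Rotation built from the three attitude angles (radians): rotations
    around the world X, Y and Z axes, composed here in Z * Y * X order."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rz @ ry @ rx

def pixel_to_world(u, v, z_c, fx, fy, cx, cy, alpha=0.0, beta=0.0, gamma=0.0):
    """Back-project pixel (u, v) with camera-frame depth z_c into world
    coordinates using a pinhole model with focal lengths fx, fy and
    principal point (cx, cy)."""
    x_c = (u - cx) * z_c / fx  # camera-frame X from the pinhole model
    y_c = (v - cy) * z_c / fy  # camera-frame Y
    return rotation_matrix(alpha, beta, gamma) @ np.array([x_c, y_c, z_c])
```

With zero attitude angles the sensor and world frames coincide, so a pixel at the principal point maps to (0, 0, Z_c).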
- the depth image in the world coordinate system is processed, and a mean value of each row is calculated to thereby obtain a means-by-row graph.
- the depth image in the world coordinate system can be preprocessed.
- the preprocessing may include smoothing, filtering, denoising, and so on.
- the mean value of the pixels in each row of the depth image can be calculated, and then, based on the number of each row and the mean value corresponding to that row, a means-by-row graph I_rowsMean can be established.
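The construction of the means-by-row graph can be sketched as follows; skipping zero-valued (invalid) depth pixels is an assumption of this sketch, not something the patent specifies:

```python
import numpy as np

def row_means(depth_zw):
    """Mean depth of each pixel row of the Z_w image. Zero-valued pixels are
    treated as invalid and skipped (an assumption of this sketch); a row with
    no valid pixels gets a mean of 0."""
    depth_zw = np.asarray(depth_zw, dtype=float)
    valid = depth_zw > 0
    counts = valid.sum(axis=1)
    sums = np.where(valid, depth_zw, 0.0).sum(axis=1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```

The resulting vector, indexed by row number, plays the role of the means-by-row graph I_rowsMean.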
- the means-by-row graph is processed to determine a suspected road area.
- the road surface has certain characteristics: in the Z_w image of the world coordinate system, the bottom-to-top direction usually corresponds to the near-to-far direction of the road surface, so the row means are monotonically increasing.
- the row mean values that are not monotonically increasing in the bottom-to-top direction can first be removed; the remaining row mean values can then be filtered to remove lone points, and small discontinuities (micro-fault zones) can be connected, to thereby obtain a preprocessed result. After the preprocessed result is obtained, the suspected road area in the depth image can be filtered out according to it.
- a row whose median in the column vector of row means is 0 can be set to 0. Then each pixel position whose depth value in the depth image differs from the corresponding value of the row-mean column vector by at least a preset level of tolerance for road undulation is set to 0; the pixel positions in the depth image with nonzero values are determined as the suspected road area.
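A minimal sketch of the road-area filtering described above. The lone-point filtering and micro-fault-zone connection steps are omitted, and the function name and the exact monotonicity scan are illustrative assumptions:

```python
import numpy as np

def suspected_road_mask(depth_zw, row_mean, tol):
    """Mark pixels belonging to the suspected road area: keep rows whose mean
    increases monotonically from the bottom of the image upward, then keep
    pixels whose depth deviates from the row mean by less than `tol` (the
    preset tolerance for road undulation)."""
    depth_zw = np.asarray(depth_zw, dtype=float)
    mean_col = np.asarray(row_mean, dtype=float).copy()
    running_max = -np.inf
    # Scan bottom-to-top; zero out rows that break the monotonic increase.
    for i in range(len(mean_col) - 1, -1, -1):
        if mean_col[i] > running_max:
            running_max = mean_col[i]
        else:
            mean_col[i] = 0.0
    # Rows zeroed above are excluded; so are pixels deviating by >= tol.
    return (mean_col[:, None] > 0) & (np.abs(depth_zw - mean_col[:, None]) < tol)
```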
- the suspected road area is evaluated against a preset position threshold of the main plane to determine the road area in the depth image.
- a selection strategy can be set in advance. For example, the region with the largest area, whose distance from its lowest position to the lowest position of the depth map Z_w does not exceed ε_rows, can be selected. Specifically, it can be set that: ε_rows < 5%·H_Zw;
- ε_rows represents the threshold value for the position of the main plane;
- H_Zw represents the height of the depth image Z_w.
- the process of determining a suspected depression region in a road area can be as follows:
- a mean value for each row in the road area is calculated. Because there are some error factors in the road area, the road area can be preprocessed in advance. In a specific implementation process, the preprocessing can include smoothing, filtering, denoising and other processing. Next, the mean value for each row of the preprocessed road area can be calculated.
- the specific calculation method can be referenced to the description as mentioned above.
- the formula of the band-stop filter is as follows:
- Z_wGnd(i, j) ← 0, if |Z_wGnd(i, j) − I_rowsMeanGnd(i)| ≤ δ;
  Z_wGnd(i, j) ← Z_wGnd(i, j), if |Z_wGnd(i, j) − I_rowsMeanGnd(i)| > δ
- Z_wGnd(i, j) is the depth value of the depth image corresponding to the road area at the coordinates (i, j); I_rowsMeanGnd(i) is the mean value of the depth image corresponding to the road area at row i; and δ is the preset level of tolerance for depressions on the road surface.
- the setting of the value of δ is related to the depth sensor used and to the actual road condition. If the value is set too small, there will be relatively more false positives; if it is set too large, there will be relatively more false negatives, which hinders subsequent processing. Therefore, in combination with a large amount of experimental data and empirical values, the range of δ is usually within [5, 30].
- the row means can be filtered using a band-stop filter to obtain a suspected depression region as shown in FIG. 4, which illustrates a diagram of a second scene using the information processing method provided by some embodiments of the disclosure.
- after the row means have been filtered with the above formula, the set of nonzero Z_wGnd(i, j) values thus obtained constitutes the suspected depression region.
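The band-stop filter above maps directly onto array operations. This sketch assumes `row_mean_gnd` holds I_rowsMeanGnd as a one-dimensional vector over the road-area rows; the function name is illustrative:

```python
import numpy as np

def band_stop_filter(z_gnd, row_mean_gnd, delta):
    """Apply the band-stop filter: pixels within `delta` of their row mean
    are treated as road surface and set to 0; pixels deviating by more than
    `delta` keep their depth value and form the suspected depression region."""
    z = np.asarray(z_gnd, dtype=float)
    mean_col = np.asarray(row_mean_gnd, dtype=float)[:, None]
    return np.where(np.abs(z - mean_col) <= delta, 0.0, z)
```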
- the suspected depression region is preprocessed.
- preprocessing such as binarization and morphological processing can be performed on the suspected depression region to remove the influence of burrs and islands on the subsequent extraction of depression edges.
- the contour C_pothole of the suspected depression region is extracted, and the contour is used as a candidate depression region.
- the area of the candidate depression region is calculated.
- the area of the candidate depression region is set as S_pothole.
- the X_w values X_wR and X_wL, corresponding respectively to the right-most and left-most points of the candidate depression region, and the Z_w values Z_wT and Z_wB, corresponding respectively to the top-most and bottom-most points, can be used: the area of the rectangular box defined by X_wR, X_wL, Z_wT and Z_wB can be substituted for the contour area.
- the area threshold is set as ε; then if S_pothole > ε, the candidate depression region is determined to be a depression region, and the depth image acquired at that moment by the depth sensor contains a depression region.
- the setting of the value of ε is related to the depth sensor used and to the actual road condition. If the value is too small, there will be relatively more false positives; if it is too large, there will be relatively more false negatives. Therefore, in combination with a large amount of experimental data and empirical values, the range of ε is usually within [100, 400].
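The rectangular-box substitution and the threshold test can be sketched as follows; here the spans are taken over the candidate region's own X_w and Z_w values, and the function names are illustrative assumptions:

```python
import numpy as np

def depression_area(region_mask, xw, zw):
    """Substitute area for the candidate region: span of its pixels' X_w
    values times the span of their Z_w values (the rectangular box
    X_wR, X_wL, Z_wT, Z_wB described in the text)."""
    ys, xs = np.nonzero(region_mask)
    if ys.size == 0:
        return 0.0
    xw_vals = np.asarray(xw, dtype=float)[ys, xs]
    zw_vals = np.asarray(zw, dtype=float)[ys, xs]
    return (xw_vals.max() - xw_vals.min()) * (zw_vals.max() - zw_vals.min())

def is_depression(region_mask, xw, zw, epsilon):
    """A candidate is kept as a depression when its area exceeds epsilon."""
    return depression_area(region_mask, xw, zw) > epsilon
```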
- the information processing method provided in the embodiments of this disclosure processes the acquired depth image. Firstly, a road area in the depth image can be determined based on the row means of the depth image; then, a suspected depression region in the road area can be determined; and finally, the suspected depression region can be evaluated against a depression threshold to determine whether the depth image contains a depression region.
- the technical solutions provided by the embodiments of this application can effectively judge whether there is a depression region on a road surface, with high detection efficiency and fast calculation speed. They can solve the problem of low accuracy in detecting depressions or objects below the horizontal line that is associated with existing technologies.
- FIG. 5 shows another flow chart of an information processing method provided by some embodiments of the disclosure. As shown in FIG. 5 , the embodiments of the information processing method can further include the following step:
- the area of the candidate depression region is set as S_pothole, and the area threshold is set as ε. Then if S_pothole ≤ ε, the candidate depression region is determined as a non-depression region and can be deleted.
- FIG. 6 shows yet another flow chart of an information processing method provided by some embodiments of the disclosure. As shown in FIG. 6 , the information processing method further includes the following step:
- a detection module of the product can feed or transmit parameters to a corresponding prompting module, so that the prompting module can output a prompt message.
- the prompt message can include voice information, vibration information, text information, sound information, light information, etc.
- FIG. 7 is a schematic diagram illustrating a structure of an information processing device according to some embodiments of the disclosure. As shown in FIG. 7, the embodiments of the device include: an acquisition unit 11, a processing unit 12, a determination unit 13 and a judgment unit 14.
- the acquisition unit 11 is configured to acquire a depth image.
- the processing unit 12 is configured to process the depth image to obtain a means-by-row graph, and then to determine a road area in the depth image based on the means-by-row graph.
- the determination unit 13 is configured to determine a suspected depression region in the road area.
- the judgment unit 14 is configured to evaluate the suspected depression region against a depression threshold to determine whether the depth image contains a depression region.
- the depth image can be an image under a camera/sensor coordinate system.
- the processing unit 12 can be specifically configured:
- the determination unit 13 is configured:
- the judgment unit 14 is specifically configured:
- the information processing device provided in the embodiments of this application can be used to implement the technical scheme of the information processing method as shown in FIG. 1 . Because the implementation principle and technical effects are similar, the description thereof is not repeated herein.
- FIG. 8 illustrates a schematic diagram of a structure of the information processing device according to some other embodiments of the disclosure. As shown in FIG. 8 , the embodiments of the information processing device further include a deletion unit 15 .
- the deletion unit 15 is configured to delete the candidate depression region if the area of the candidate depression region is less than or equal to the area threshold.
- the information processing device provided in these embodiments of the application can be used to implement the technical scheme of the embodiment of the method shown in FIG. 5. Because the implementation principle and technical effects are similar, the description thereof is not repeated herein.
- FIG. 9 shows a schematic diagram of a structure of the information processing device according to yet another embodiment of the present application. As shown in FIG. 9 , the embodiment of the device further includes an output unit 16 .
- the output unit is configured to output a prompt message upon determining that the depth image contains a depression region.
- the information processing device provided in the embodiment of the present application can be used to implement the technical scheme of the embodiment of the method shown in FIG. 6 . Because the implementation principle and technical effects are similar, the description thereof is not repeated herein.
- FIG. 10 is a schematic diagram of the cloud-based processing device provided by some embodiments of this application.
- the cloud processing device includes a processor 21 and a memory 22 .
- the memory 22 is configured to store instructions. When the instructions are executed by the processor 21 , the device can execute any of the embodiments of the method as described above.
- the cloud processing device provided in the embodiments of the present application can be used to implement the technical schemes of the method embodiments shown in any of FIGS. 1-6 . Because the implementation principle and technical effects are similar, description thereof is not repeated herein.
- embodiments of the present application also provide a computer program product which can be directly loaded into an internal memory of a computer and contains software codes. After the computer program is loaded and executed, the computer program can realize any of the embodiments of the method as described above.
- the computer program product provided in the embodiments of the present application can be used to implement the technical schemes of the method embodiments shown in any of FIGS. 1-6. Because its implementation principle and technical effects are similar, description thereof is not repeated herein.
- the system, device and method disclosed may be implemented in other ways.
- the embodiments of the device described above are merely illustrative.
- the division of the units described above is only a logical functional division, and in actual practice, there may be other ways of division. For instance, multiple units or components can be combined or integrated into a system; some features can be ignored or not implemented.
- the coupling, direct coupling, or communicative connection shown or discussed above may be through some interfaces, or an indirect coupling between devices or units, which may be in an electrical, a mechanical, or other forms.
- components described as separate may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit; that is, it may be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the present embodiments.
- the functional units in the various embodiments of the present application may be integrated in one processing unit, or may be physically present as separate units, or may be integrated in one unit by two or more units.
- the above integrated units can be implemented either in the form of hardware or in the form of hardware plus software functional units.
- the integrated unit realized in the form of software functional unit can be stored in a computer readable storage medium.
- the above software functional unit can be stored in a storage medium, including instructions for causing a computer device (e.g., a personal computer, a server, a network device, etc.) or a processor to perform some steps of the methods described in the various embodiments of the present application.
- the aforementioned storage medium can include: a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, a CD, or another medium that can store program code.
Abstract
Description
ε_rows < 5%·H_Zw;
Claims (18)
ε_rows < 5%·H_Zw;
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/072132 WO2019136641A1 (en) | 2018-01-10 | 2018-01-10 | Information processing method and apparatus, cloud processing device and computer program product |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200090323A1 US20200090323A1 (en) | 2020-03-19 |
US11379963B2 true US11379963B2 (en) | 2022-07-05 |
Family
ID=62657689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/609,447 Active 2038-12-13 US11379963B2 (en) | 2018-01-10 | 2018-01-10 | Information processing method and device, cloud-based processing device, and computer program product |
Country Status (5)
Country | Link |
---|---|
US (1) | US11379963B2 (en) |
EP (1) | EP3605460A4 (en) |
JP (1) | JP6955783B2 (en) |
CN (1) | CN108235774B (en) |
WO (1) | WO2019136641A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108885791B (en) * | 2018-07-06 | 2022-04-08 | 达闼机器人有限公司 | Ground detection method, related device and computer readable storage medium |
CN109074490B (en) * | 2018-07-06 | 2023-01-31 | 达闼机器人股份有限公司 | Path detection method, related device and computer readable storage medium |
CN110852312B (en) * | 2020-01-14 | 2020-07-17 | 深圳飞科机器人有限公司 | Cliff detection method, mobile robot control method, and mobile robot |
CN111274939B (en) * | 2020-01-19 | 2023-07-14 | 交信北斗科技有限公司 | Automatic extraction method for road pavement pothole damage based on monocular camera |
CN112070700B (en) * | 2020-09-07 | 2024-03-29 | 深圳市凌云视迅科技有限责任公司 | Method and device for removing protrusion interference noise in depth image |
CN112099504B (en) * | 2020-09-16 | 2024-06-18 | 深圳优地科技有限公司 | Robot moving method, device, equipment and storage medium |
CN112435297B (en) * | 2020-12-02 | 2023-04-18 | 达闼机器人股份有限公司 | Target object pose determining method and device, storage medium and electronic equipment |
CN115393813B (en) * | 2022-08-18 | 2023-05-02 | 中国人民公安大学 | Road identification method, device, equipment and storage medium based on remote sensing image |
CN115760805B (en) * | 2022-11-24 | 2024-02-09 | 中山大学 | Positioning method for processing element surface depression based on visual touch sense |
CN116820125B (en) * | 2023-06-07 | 2023-12-22 | 哈尔滨市大地勘察测绘有限公司 | Unmanned seeder control method and system based on image processing |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130141578A1 (en) * | 2011-11-20 | 2013-06-06 | Magna Electronics, Inc. | Vehicle vision system with enhanced functionality |
JP2014106897A (en) | 2012-11-29 | 2014-06-09 | Toyota Motor Corp | Passage propriety determination device |
CN104200453A (en) | 2014-09-15 | 2014-12-10 | 西安电子科技大学 | Parallax image correcting method based on image segmentation and credibility |
CN104463145A (en) | 2014-12-23 | 2015-03-25 | 上海斐讯数据通信技术有限公司 | Electronic equipment and obstacle reminding method |
CN104899869A (en) | 2015-05-14 | 2015-09-09 | 浙江大学 | Plane and barrier detection method based on RGB-D camera and attitude sensor |
CN106597690A (en) | 2016-11-23 | 2017-04-26 | 杭州视氪科技有限公司 | Visually impaired people passage prediction glasses based on RGB-D camera and stereophonic sound |
CN106843491A (en) | 2017-02-04 | 2017-06-13 | 上海肇观电子科技有限公司 | Smart machine and electronic equipment with augmented reality |
JP2017138238A (en) | 2016-02-04 | 2017-08-10 | 株式会社トプコン | Display method for road properties, and display apparatus for road properties |
CN206460410U (en) | 2017-02-04 | 2017-09-01 | 上海肇观电子科技有限公司 | Smart machine with augmented reality |
CN107341789A (en) | 2016-11-23 | 2017-11-10 | 杭州视氪科技有限公司 | One kind is based on RGB D cameras and stereosonic visually impaired people's path precognition system and method |
US20180182109A1 (en) * | 2016-12-22 | 2018-06-28 | TCL Research America Inc. | System and method for enhancing target tracking via detector and tracker fusion for unmanned aerial vehicles |
US20190187704A1 (en) * | 2017-12-20 | 2019-06-20 | International Business Machines Corporation | Self-driving vehicle passenger management |
2018
- 2018-01-10 EP EP18899638.3A patent/EP3605460A4/en not_active Withdrawn
- 2018-01-10 JP JP2019559815A patent/JP6955783B2/en active Active
- 2018-01-10 US US16/609,447 patent/US11379963B2/en active Active
- 2018-01-10 WO PCT/CN2018/072132 patent/WO2019136641A1/en unknown
- 2018-01-10 CN CN201880000099.1A patent/CN108235774B/en active Active
Non-Patent Citations (12)
Title |
---|
English Translation of the Written Opinion of the International Search Authority in the international application No. PCT/CN2018/072132, dated Oct. 18, 2018. |
First Office Action of the Chinese application No. 201880000099.1, dated Sep. 17, 2019. |
First Office Action of the Japanese application No. 2019-559815, dated Nov. 25, 2020. |
International Search Report in the international application No. PCT/CN2018/072132, dated Oct. 18, 2018. |
Li, Wei, et al. "Three-dimensional pavement crack detection algorithm based on two-dimensional empirical mode decomposition." Journal of Transportation Engineering, Part B: Pavements 143.2 (2017): 04017005. (Year: 2017). * |
Lokeshwor Huidrom, Lalit Kumar Das, and S. K. Sud, "Method for Automated Assessment of Potholes, Cracks and Patches from Road Surface Video Clips", Procedia - Social and Behavioral Sciences, vol. 104, Dec. 2, 2013, pp. 312-321, XP055694022, ISSN: 1877-0428, DOI: 10.1016/j.sbspro.2013.11.124; section 2.2; p. 315. |
Moazzam, Imran, et al. "Metrology and visualization of potholes using the Microsoft Kinect sensor." 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). IEEE, 2013. (Year: 2013). * |
Mohammad R. Jahanshahi, Farrokh Jazizadeh, Sami F. Masri, and Burcin Becerik-Gerber, "Unsupervised Approach for Autonomous Pavement-Defect Detection and Quantification Using an Inexpensive Depth Sensor", Journal of Computing in Civil Engineering, vol. 27, no. 6, Nov. 1, 2013, pp. 743-754, XP055693452, ISSN: 0887-3801, DOI: 10.1061/(ASCE)CP.1943-5487.0000245; section "Defect Detection"; figures 2, 3, 7. |
Ryu, Seung-Ki, Taehyeong Kim, and Young-Ro Kim. "Feature-based pothole detection in two-dimensional images." Transportation Research Record 2528.1 (2015): 9-17. (Year: 2015). * |
Supplementary European Search Report in the European application No. 18899638.3, dated May 20, 2020. |
Also Published As
Publication number | Publication date |
---|---|
JP6955783B2 (en) | 2021-10-27 |
EP3605460A1 (en) | 2020-02-05 |
EP3605460A4 (en) | 2020-06-17 |
CN108235774B (en) | 2020-07-14 |
CN108235774A (en) | 2018-06-29 |
US20200090323A1 (en) | 2020-03-19 |
WO2019136641A1 (en) | 2019-07-18 |
JP2020518918A (en) | 2020-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11379963B2 (en) | Information processing method and device, cloud-based processing device, and computer program product | |
CN109271944B (en) | Obstacle detection method, obstacle detection device, electronic apparatus, vehicle, and storage medium | |
US11643076B2 (en) | Forward collision control method and apparatus, electronic device, program, and medium | |
CN112967283B (en) | Target identification method, system, equipment and storage medium based on binocular camera | |
CN108520536B (en) | Disparity map generation method and device and terminal | |
CN108280401B (en) | Pavement detection method and device, cloud server and computer program product | |
CN112233221B (en) | Three-dimensional map reconstruction system and method based on instant positioning and map construction | |
US20150377607A1 (en) | Sensor system for determining distance information based on stereoscopic images | |
CN112097732A (en) | Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium | |
CN111178150A (en) | Lane line detection method, system and storage medium | |
KR20110058262A (en) | Apparatus and method for extracting vehicle | |
WO2021017211A1 (en) | Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal | |
CN115496923B (en) | Multi-mode fusion target detection method and device based on uncertainty perception | |
CN115410167A (en) | Target detection and semantic segmentation method, device, equipment and storage medium | |
CN115861601A (en) | Multi-sensor fusion sensing method and device | |
CN110197104B (en) | Distance measurement method and device based on vehicle | |
CN112529011A (en) | Target detection method and related device | |
CN116403191A (en) | Three-dimensional vehicle tracking method and device based on monocular vision and electronic equipment | |
KR102188164B1 (en) | Method of Road Recognition using 3D Data | |
CN116052120A (en) | Excavator night object detection method based on image enhancement and multi-sensor fusion | |
CN112364693B (en) | Binocular vision-based obstacle recognition method, device, equipment and storage medium | |
CN113011212B (en) | Image recognition method and device and vehicle | |
Deb et al. | A novel approach of assisting the visually impaired to navigate path and avoiding obstacle-collisions | |
CN117372988B (en) | Road boundary detection method, device, electronic equipment and storage medium | |
CN114299131A (en) | Three-camera-based short and small obstacle detection method and device and terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment | Owner name: CLOUDMINDS (SHENZHEN) ROBOTICS SYSTEMS CO., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: LI, YE; LIAN, SHIGUO; REEL/FRAME: 051430/0264. Effective date: 20191014 |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
AS | Assignment | Owner name: DATHA ROBOT CO., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: CLOUDMINDS (SHENZHEN) ROBOTICS SYSTEMS CO., LTD.; REEL/FRAME: 055613/0424. Effective date: 20210311 |
AS | Assignment | Owner name: CLOUDMINDS (SHANGHAI) ROBOTICS CO., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: DATHA ROBOT CO., LTD.; REEL/FRAME: 055973/0581. Effective date: 20210407 |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
AS | Assignment | Owner name: CLOUDMINDS ROBOTICS CO., LTD, CHINA. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE'S NAME AND ADDRESS PREVIOUSLY RECORDED AT REEL: 055973 FRAME: 0581. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT; ASSIGNOR: DATHA ROBOT CO., LTD.; REEL/FRAME: 060384/0843. Effective date: 20210407 |
AS | Assignment | Owner name: CLOUDMINDS ROBOTICS CO., LTD., CHINA. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE'S NAME AND ADDRESS PREVIOUSLY RECORDED AT REEL: 055973 FRAME: 0581. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT; ASSIGNOR: DATHA ROBOT CO., LTD.; REEL/FRAME: 060173/0560. Effective date: 20210407 |
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |