WO2021128834A1 - Computer vision-based navigation method and apparatus, computer device, and medium - Google Patents

Computer vision-based navigation method and apparatus, computer device, and medium

Info

Publication number
WO2021128834A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
obstacle
target
recognition
data
Prior art date
Application number
PCT/CN2020/105015
Other languages
English (en)
Chinese (zh)
Inventor
温桂龙
Original Assignee
深圳壹账通智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2021128834A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C11/04 Interpretation of pictures
    • G01C11/30 Interpretation of pictures by triangulation
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents

Definitions

  • This application relates to the field of artificial intelligence navigation, and in particular to a navigation method, device, computer equipment, and storage medium based on computer vision.
  • Existing navigation systems generally provide functions such as speech synthesis, text reading, zooming, and touch feedback, which offer users convenience, help them plan routes, and suggest travel modes.
  • However, the inventor found that visually inconvenienced users cannot perceive in real time the road conditions along the navigation route, making them prone to danger when moving along it. A visually inconvenienced user here may be a visually impaired user, or a user who cannot concentrate on watching real-time road conditions for other reasons.
  • The embodiments of the present application provide a computer vision-based navigation method, device, computer equipment, and storage medium to solve the problem that visually inconvenienced users are prone to danger when moving along navigation routes recommended by existing navigation systems.
  • A navigation method based on computer vision, including:
  • acquiring navigation request information, where the navigation request information includes a starting point position and an ending point position;
  • performing route planning according to the starting point position and the ending point position, acquiring a first target route, and playing navigation voice data corresponding to the first target route through a voice playback system;
  • acquiring a real-time road-condition video corresponding to the first target route, extracting an image to be recognized from the real-time video, preprocessing the image to be recognized to obtain a target recognition image, and recognizing the target recognition image with a target obstacle recognition model to obtain a current recognition result;
  • if the current recognition result is that there is an obstacle, using a computer vision tool to perform binocular distance measurement on the obstacle to determine the distance data between the user's current position and the obstacle;
  • obtaining corresponding evasion reminder information according to the distance data and preset alarm conditions, and playing the evasion reminder information through the voice playback system.
  • a navigation device based on computer vision including:
  • a navigation request information acquisition module configured to acquire navigation request information, where the navigation request information includes a starting point position and an ending point position;
  • the first target route acquisition module is configured to perform route planning according to the starting point position and the ending point position, acquire the first target route, and play the navigation voice data corresponding to the first target route by using a voice playback system;
  • the current recognition result obtaining module is configured to obtain the real-time road-condition video corresponding to the first target route, extract the image to be recognized from the video, preprocess the image to be recognized to obtain the target recognition image, and use the target obstacle recognition model to recognize the target recognition image and obtain the current recognition result;
  • the distance data acquisition module is configured to, if the current recognition result is that there is an obstacle, use a computer vision tool to perform binocular distance measurement on the obstacle, and determine the distance data between the user's current position and the obstacle;
  • the evasion reminder information acquisition module is configured to obtain corresponding evasion reminder information according to the distance data and preset alarm conditions, and use the voice playback system to play the evasion reminder information.
  • A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
  • acquiring navigation request information, where the navigation request information includes a starting point position and an ending point position;
  • performing route planning according to the starting point position and the ending point position, acquiring a first target route, and playing navigation voice data corresponding to the first target route through a voice playback system;
  • acquiring a real-time road-condition video corresponding to the first target route, extracting an image to be recognized from the real-time video, preprocessing the image to be recognized to obtain a target recognition image, and recognizing the target recognition image with a target obstacle recognition model to obtain a current recognition result;
  • if the current recognition result is that there is an obstacle, using a computer vision tool to perform binocular distance measurement on the obstacle to determine the distance data between the user's current position and the obstacle;
  • obtaining corresponding evasion reminder information according to the distance data and preset alarm conditions, and playing the evasion reminder information through the voice playback system.
  • One or more readable storage media storing computer-readable instructions, where, when the computer-readable instructions are executed by one or more processors, the one or more processors execute the following steps:
  • acquiring navigation request information, where the navigation request information includes a starting point position and an ending point position;
  • performing route planning according to the starting point position and the ending point position, acquiring a first target route, and playing navigation voice data corresponding to the first target route through a voice playback system;
  • acquiring a real-time road-condition video corresponding to the first target route, extracting an image to be recognized from the real-time video, preprocessing the image to be recognized to obtain a target recognition image, and recognizing the target recognition image with a target obstacle recognition model to obtain a current recognition result;
  • if the current recognition result is that there is an obstacle, using a computer vision tool to perform binocular distance measurement on the obstacle to determine the distance data between the user's current position and the obstacle;
  • obtaining corresponding evasion reminder information according to the distance data and preset alarm conditions, and playing the evasion reminder information through the voice playback system.
  • The above computer vision-based navigation method, device, computer equipment, and storage medium perform route planning according to the starting point position and the ending point position, obtain the first target route, and use the voice playback system to play the navigation voice data corresponding to the first target route, thereby providing users with voice navigation and making it convenient for them to travel based on the navigation voice data they hear.
  • The real-time road-condition video corresponding to the first target route is acquired, the image to be recognized is extracted from it and preprocessed to obtain the target recognition image, and the target obstacle recognition model recognizes the target recognition image to obtain the current recognition result, so as to determine whether there is an obstacle as the user advances along the first target route.
  • If there is an obstacle, a computer vision tool is used to perform binocular distance measurement on the obstacle to quickly determine the distance data between the user's current position and the obstacle.
  • According to the distance data and preset alarm conditions, the corresponding evasion reminder information is obtained and played through the voice playback system, so as to provide a barrier-free travel solution for visually inconvenienced users, avoiding the danger that may arise when a user who cannot view road conditions in real time fails to see existing obstacles, and ensuring travel safety.
  • Fig. 1 is a schematic diagram of an application environment of a computer vision-based navigation method in an embodiment of the present application
  • Fig. 2 is a flowchart of a computer vision-based navigation method in an embodiment of the present application
  • Fig. 3 is a flowchart of a computer vision-based navigation method in an embodiment of the present application
  • Fig. 4 is a flowchart of a computer vision-based navigation method in an embodiment of the present application.
  • Fig. 5 is a flowchart of a computer vision-based navigation method in an embodiment of the present application.
  • Fig. 6 is a flowchart of a computer vision-based navigation method in an embodiment of the present application.
  • Fig. 7 is a flowchart of a computer vision-based navigation method in an embodiment of the present application.
  • Fig. 8 is a functional block diagram of a navigation device based on computer vision in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the principle of binocular ranging in an embodiment of the present application.
  • Fig. 10 is a schematic diagram of a computer device in an embodiment of the present application.
  • the computer vision-based navigation method provided by the embodiments of the present application can be applied to the application environment as shown in FIG. 1.
  • the computer vision-based navigation method is applied to a navigation system.
  • The navigation system includes a client and a server, as shown in FIG. 1, and provides navigation and corresponding avoidance solutions for visually inconvenienced users to ensure their travel safety.
  • The client, also called the user terminal, refers to the program that corresponds to the server and provides local services to the user.
  • the client can be installed on, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a computer vision-based navigation method is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
  • S201 Acquire navigation request information, where the navigation request information includes a starting point position and an ending point position.
  • The navigation request information refers to information that the user sends to the server through the client to request route planning according to the starting point position and the ending point position.
  • The starting point position is the position of the starting point of the navigation route, determined independently by the user.
  • The ending point position is the position of the end point of the navigation route, which also needs to be determined independently by the user.
  • S202 Carry out route planning according to the starting point position and the ending point position, obtain the first target route, and use the voice playback system to play the navigation voice data corresponding to the first target route.
  • The first target route refers to a route from the starting point position to the ending point position obtained by planning according to the navigation request information.
  • Navigation voice data refers to the voice data that provides navigation for the user.
  • the navigation voice data corresponds to the first target route.
  • For example, the navigation voice data may be "please walk xx meters to the left and then turn right" or "you have deviated from the route", etc.
  • The voice playback system refers to a system used for voice playback.
  • the voice playback system can play the first target route.
  • After the server obtains the navigation request information, it inputs the starting point position and the ending point position into the navigation system, obtains the first target route fed back by the navigation system, and uses the voice playback system to play the navigation voice data corresponding to the first target route, so as to provide voice navigation and enable visually inconvenienced users to follow the first target route according to the played navigation voice data.
  • the route with the shortest walking time may be selected as the first target route.
  • S203 Obtain a real-time video of the road condition corresponding to the first target route, extract the image to be recognized from the real-time video of the road condition, preprocess the image to be recognized, obtain the target recognition image, use the target obstacle recognition model to recognize the target recognition image, and obtain the current recognition result.
  • the real-time video of road conditions refers to the video captured by the client in real time when the user is walking according to the navigation voice data.
  • the image to be recognized refers to the image that needs to be recognized.
  • the video image extraction software is used to extract the image to be recognized in the real-time video of the road condition.
  • For example, the video image extraction software may extract one image to be recognized from the real-time road-condition video every 10 seconds.
  • Alternatively, an image acquisition port may extract the image to be recognized from the real-time road-condition video, likewise at a frequency of, for example, one image every 10 seconds (a minimal sketch of this periodic extraction follows below).
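  • As a concrete illustration of the periodic extraction described above, the following minimal Python/OpenCV sketch pulls one frame from the road-condition video at a fixed interval; the 10-second interval comes from the example in the text, while the function name and the fallback frame rate are assumptions.

```python
import cv2

def extract_frames(video_source, interval_s=10):
    """Yield one image to be recognized from the road-condition video every interval_s seconds."""
    cap = cv2.VideoCapture(video_source)     # file path, camera index, or stream URL
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 fps if the source reports none
    step = max(1, int(fps * interval_s))     # number of frames between extractions
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield frame                      # this frame becomes an image to be recognized
        idx += 1
    cap.release()
```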
  • the target recognition image refers to the image obtained by preprocessing the image to be recognized.
  • the target obstacle recognition model is a model used to recognize obstacle objects in an image.
  • the target obstacle recognition model is used to recognize the target recognition image, so as to determine whether there is an obstacle on the road that prevents the user from moving forward when the user walks along the first target route.
  • the current recognition result is the recognition result of the target recognition image by the target obstacle recognition model.
  • Obstacle objects refer to objects that hinder the user's progress when the user advances along the first target route.
  • Specifically, the client's camera is turned on to record video and obtain the real-time road-condition video; video image extraction software or the image acquisition port is used to extract the image to be recognized from the real-time video, and the image to be recognized is grayscaled and otherwise preprocessed to obtain the target recognition image.
  • The target obstacle recognition model is then used to recognize the target recognition image and obtain the current recognition result of whether there may be an obstacle as the user moves along the first target route, so that subsequent obstacle-avoidance handling can be performed based on the current recognition result to ensure the user's travel safety.
  • S204: If the current recognition result is that there is an obstacle, use a computer vision tool to perform binocular distance measurement on the obstacle, and determine the distance data between the user's current position and the obstacle.
  • Computer vision refers to machine vision that uses cameras and computers instead of human eyes to identify, track, and measure obstacles.
  • Computer vision tools include but are not limited to Halcon, MATLAB+Simulink and OpenCV.
  • The user's current position refers to the position where the user is currently located.
  • the distance data refers to the data of the distance between the user's current position and the obstacle. The distance data is specifically the distance between the three-dimensional coordinates of the obstacle and the three-dimensional coordinates of the user's current position.
  • the three-dimensional coordinates of the user's current position are the origin coordinates.
  • Binocular distance measurement refers to the process of calculating the image extracted from the real-time video of road conditions through computer vision tools to determine the distance between the user's current position and the obstacle.
  • Specifically, this embodiment uses the OpenCV tool to perform calculations on the images extracted from the real-time road-condition video, so as to quickly obtain the distance data from the user's current position to the position of the obstacle.
  • When the user's eyes are inconvenienced, the distance data between the user and the obstacle can be calculated with the computer vision tool, so that whether the obstacle will hinder the user's advance can be determined accurately, and the corresponding evasion reminder information for the obstacle can be obtained later, providing data support to ensure the user's travel safety.
  • S205 Obtain corresponding evasion reminder information according to the distance data and preset alarm conditions, and use a voice playback system to play the evasion reminder information.
  • the preset alarm condition refers to a preset alarm condition, and the alarm condition is set according to whether the obstacle will hinder the user from moving forward.
  • Evasion reminder information refers to the reminder information generated by judging the distance data against the preset alarm conditions. For example, when the obstacle will not hinder the user, the evasion reminder message may be "Please note that there is an xx obstacle x meters ahead on the left".
  • When the obstacle may hinder the user, the evasion reminder message may be "Please note that there is an xx obstacle x meters ahead on the left, please stop"; and when the obstacle prevents the user from moving forward, the evasion reminder message may be "Please note that there is an xx obstacle x meters ahead on the left, the first target route needs to be changed", etc.
  • the avoidance reminder information can provide users with a barrier-free forwarding solution, avoid the danger that may be caused by the inconvenience of the user's eyes and the inability to see the existing obstacles, and ensure the user's travel safety.
  • In this embodiment, route planning is performed according to the starting point position and the ending point position, the first target route is obtained, and the voice playback system plays the navigation voice data corresponding to the first target route, providing users with voice navigation and making it convenient for them to travel based on the navigation voice data they hear.
  • The real-time road-condition video corresponding to the first target route is acquired, the image to be recognized is extracted from it, the image is preprocessed to obtain the target recognition image, and the target obstacle recognition model recognizes the target recognition image to obtain the current recognition result, so as to determine whether there is an obstacle as the user moves along the first target route.
  • If there is an obstacle, a computer vision tool is used to perform binocular distance measurement on the obstacle to quickly determine the distance data between the user's current position and the obstacle.
  • According to the distance data and preset alarm conditions, the corresponding evasion reminder information is obtained and played through the voice playback system, providing a barrier-free travel solution for visually inconvenienced users, avoiding the danger that may arise when a user who is visually inconvenienced or otherwise unable to view road conditions in real time cannot see existing obstacles, and ensuring user safety.
  • In an embodiment, the navigation request information in step S201 is information corresponding to the starting point position and the ending point position independently input by the user. Specifically, the starting point and ending point positions may be typed directly on the client as text, determined by automatic positioning technology, or input by voice. As shown in Figure 3, step S201, namely obtaining the navigation request information, includes:
  • S301: Use the voice playback system to play the position input reminder data, and receive the voice data to be recognized that is input through the voice collection system based on the position input reminder data.
  • the location input reminder data refers to the data issued by the voice playback system to remind the user to input the location.
  • the position input reminder data specifically includes the start position input reminder data and the end position input reminder data.
  • the start position input reminder data may be "please enter the start position".
  • the voice data to be recognized is the data that the user said contains the starting position or the ending position.
  • the voice collection system is a system used to collect user voice data, which can be a microphone built into the client.
  • the user can independently select the voice input mode through the client.
  • Specifically, if the user selects the voice input mode, the voice playback system is used to play the position input reminder data.
  • Within the preset waiting time, the user speaks according to the position input reminder data.
  • The voice collection system collects the voice data to be recognized and sends it to the server.
  • the preset waiting time is a preset time for waiting for user feedback data. For example, the preset waiting time may be 1 minute.
  • S302 Use a voice recognition model to recognize the voice data to be recognized, and obtain the target text.
  • the voice recognition model is a model that is pre-trained to recognize the text content in the voice data to be recognized.
  • the target text refers to the text corresponding to the voice data to be recognized, specifically the text corresponding to the start position or the end position.
  • the voice recognition model is used to recognize the voice data to be recognized, and the target text including the starting point or the ending point can be quickly obtained, so as to subsequently plan a route for the user.
  • S303 Use the speech synthesis technology to perform speech synthesis on the target text, and obtain the to-be-confirmed speech data corresponding to the target text.
  • speech synthesis technology is a technology that converts text information generated or input by a computer into speech output.
  • the voice data to be confirmed refers to the voice data obtained after speech synthesis processing is performed on the target text.
  • speech synthesis is performed on the target text to obtain the to-be-confirmed speech data corresponding to the target text for the user to determine whether the starting point or the ending point is accurate, so as to ensure the accuracy of the subsequently generated route.
  • S304 Use the voice playback system to play the voice data to be confirmed, receive the location confirmation information sent by the client, and determine the navigation request information based on the target text and the location confirmation information.
  • The position confirmation information refers to the information by which the user confirms whether the starting point position or ending point position in the target text is accurate.
  • the voice playback system is used to play the voice data to be confirmed, and the location confirmation information sent by the client is received within a preset waiting time.
  • The position confirmation information may be correctness confirmation information, that is, information confirming that the target text is accurate; it may also be error confirmation information, that is, information indicating that the target text is inaccurate and needs to be modified.
  • Determining the navigation request information based on the target text and the position confirmation information includes: if the position confirmation information confirms correctness, determining the navigation request information based on the target text; if the position confirmation information confirms an error, repeating the step of playing the position input reminder data through the voice playback system and receiving the voice data to be recognized input through the voice collection system, together with the subsequent steps (that is, repeating steps S301-S304), until correctness confirmation information is obtained, and then determining the navigation request information according to the target text.
  • In this embodiment, the user interacts with the client by means of human-computer interaction such as the voice playback system, so as to provide visually inconvenienced users with an intelligent location input method for subsequent route planning.
  • the voice playback system plays the position input reminder data that needs to be determined by the user, and receives the voice data to be recognized by the voice collection system, recognizes the voice data to be recognized, and obtains the target text in order to Follow up to plan the first target route for the user.
  • the speech synthesis technology is used to synthesize the target text to obtain the to-be-confirmed speech data corresponding to the target text, so that the user can determine whether the starting point or the ending point is accurate, so as to ensure the accuracy of the first target route generated subsequently.
  • The voice playback system plays the voice data to be confirmed, the position confirmation information sent by the client is received, and the navigation request information is determined based on the target text and the position confirmation information; interaction between the user and the client through human-computer interaction means such as the voice playback system thus provides visually inconvenienced users with an intelligent location input method for subsequent route planning (see the sketch below).
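  • The S301-S304 interaction loop can be sketched as follows. The patent does not name concrete speech components, so this sketch assumes the third-party `speech_recognition` and `pyttsx3` Python packages as stand-ins for the voice collection/recognition systems and the voice playback system; the 60-second timeout stands in for the preset waiting time.

```python
import speech_recognition as sr  # stand-in for the voice collection and recognition systems
import pyttsx3                   # stand-in for the speech synthesis / voice playback system

tts = pyttsx3.init()

def say(text):
    """Play a prompt through the voice playback system."""
    tts.say(text)
    tts.runAndWait()

def listen(recognizer):
    """Collect the voice data to be recognized and convert it to target text."""
    with sr.Microphone() as mic:
        audio = recognizer.listen(mic, timeout=60)  # preset waiting time
    return recognizer.recognize_google(audio)       # one possible recognition backend

def ask_position(prompt):
    """S301-S304: prompt, recognize, read back, and repeat until the user confirms."""
    recognizer = sr.Recognizer()
    while True:
        say(prompt)                                  # position input reminder data
        text = listen(recognizer)                    # target text
        say(f"You said: {text}. Is that correct?")   # voice data to be confirmed
        if "yes" in listen(recognizer).lower():      # position confirmation information
            return text

start_position = ask_position("Please enter the start position")
end_position = ask_position("Please enter the end position")
```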
  • In an embodiment, step S203, namely preprocessing the image to be recognized to obtain the target recognition image, includes:
  • S401 Perform grayscale and binarization processing on the image to be recognized, and obtain the image to be processed.
  • grayscale refers to the process of converting a color image to be recognized into a grayscale image to be recognized, so as to reduce the workload of subsequent image processing.
  • Binarization refers to processing the grayscaled image to generate an image with only two gray levels, and the image with only two gray levels is determined as the image to be processed. Each image to be recognized is grayscaled and binarized to obtain the image to be processed, speeding up subsequent image processing (a minimal sketch follows below).
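  • A minimal sketch of S401 in Python with OpenCV follows. The patent does not specify how the binarization threshold is chosen; Otsu's method is used here as one common choice.

```python
import cv2

def preprocess(image_bgr):
    """S401: grayscale the image to be recognized, then binarize it to two gray levels."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # grayscaling cuts later workload
    # Otsu's method picks the gray threshold automatically from the histogram
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return gray, binary
```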
  • S402 Use an edge detection algorithm and a straight line detection algorithm to process the image to be processed, and obtain a road condition recognition image.
  • the edge detection algorithm is used to measure, detect and locate the gray level change of the image to be processed, so as to determine the part of the image to be recognized with significant brightness change, and provide technical support for the subsequent segmentation of obstacles and background.
  • The edge detection algorithm includes but is not limited to the Canny edge detection algorithm.
  • the straight line detection algorithm is an algorithm used to identify a straight line from the image to be processed.
  • the straight line detection algorithm includes but is not limited to the Hough transform.
  • For example, the Hough transform is used to process the image to be processed, and the straight lines in it are extracted to determine the sidewalk, tactile paving, or roadway on the road surface, obtaining the road condition recognition image.
  • In this embodiment, the edge detection algorithm is used to process the image to be processed and detect the parts with significant brightness changes.
  • The straight line detection algorithm is then used to determine the road in the image to be processed, so as to efficiently identify road features such as sidewalks, tactile paving, and roadways (a sketch with Canny and the Hough transform follows below).
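  • S402 can be sketched with OpenCV's Canny detector and probabilistic Hough transform; the thresholds and line-length parameters below are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_road_structure(image_to_process):
    """S402: find brightness-change edges, then extract straight lines such as
    sidewalk borders, tactile paving, or roadway edges."""
    edges = cv2.Canny(image_to_process, 50, 150)  # parts with significant brightness change
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=50, maxLineGap=10)
    return edges, [] if lines is None else lines
```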
  • S403 Use a threshold selection method to segment the obstacle object and the background of the road condition recognition image, and obtain a target recognition image.
  • the threshold selection method refers to the process of using the gray level difference between the target and the background to be extracted in the image, and dividing the pixel level into several categories by setting the gray threshold to realize the separation of the target and the background.
  • the gray threshold is preset, and is used to distinguish obstacles from the background.
  • the target recognition image refers to the image obtained after processing the road condition recognition image. Specifically, it is an image determined based on the comparison result of the gray level difference between the obstacle object and the background extracted from the road condition recognition image and the gray threshold value.
  • the target recognition image is an image that is likely to be an obstacle.
  • Threshold selection methods include, but are not limited to, threshold selection methods based on genetic algorithms.
  • Specifically, the threshold selection method is used to segment the road condition recognition image into obstacle and background: the part of the road condition recognition image whose gray value is greater than the gray threshold is determined as the obstacle. This has the advantage of a small amount of calculation, so the target recognition image can be obtained quickly.
  • The gray threshold is preset and is used to distinguish the obstacle from the background in the road condition recognition image.
  • In this embodiment, the image to be recognized is extracted from the real-time road-condition video, grayscaled, and binarized to obtain the image to be processed, speeding up subsequent image processing.
  • The edge detection algorithm is used to process the image to be processed and determine the parts with significant brightness changes, providing technical support for the subsequent segmentation of obstacle and background, and the straight line detection algorithm is used to efficiently identify the road conditions on the road surface.
  • The threshold selection method is then used to segment the road condition recognition image into obstacle and background, which requires little calculation and quickly yields the target recognition image (a sketch follows below).
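  • The patent names genetic-algorithm-based threshold selection; the sketch below uses Otsu's method as a simpler stand-in for choosing the gray threshold, then keeps only pixels whose gray value exceeds it as the candidate obstacle.

```python
import cv2
import numpy as np

def segment_obstacle(road_condition_gray):
    """S403: split the road condition recognition image into obstacle and background."""
    # Otsu stands in for the genetic-algorithm threshold selection described in the text
    gray_thresh, _ = cv2.threshold(road_condition_gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # pixels whose gray value is greater than the gray threshold are treated as obstacle
    mask = (road_condition_gray > gray_thresh).astype(np.uint8) * 255
    target_recognition_image = cv2.bitwise_and(road_condition_gray,
                                               road_condition_gray, mask=mask)
    return target_recognition_image, mask
```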
  • the target recognition image includes a left-eye recognition image and a right-eye recognition image.
  • In an embodiment, if the current recognition result is that there is an obstacle, a computer vision tool is used to perform binocular distance measurement on the obstacle to determine the distance data between the user's current position and the obstacle, including:
  • S501 Use Zhang Zhengyou's calibration method to calibrate to obtain the parameter data of the binocular camera.
  • the binocular camera refers to the left and right cameras on the user client.
  • Generally, the distance data between the user's current position and the obstacle obtained with a binocular camera is more accurate than that obtained with a monocular camera.
  • the Zhang Zhengyou calibration method is a single-plane checkerboard camera calibration method proposed by Professor Zhang Zhengyou in 1998 to obtain the parameter data of the binocular camera.
  • the parameter data includes internal parameter data and external parameter data, the internal parameter data includes focal length and lens distortion parameters, and the external parameter data includes rotation matrix and translation matrix.
  • Specifically, the binocular camera is used in advance to capture multiple sets of calibration images at different angles and distances, and the Zhang Zhengyou calibration method is then applied to these calibration images to obtain the parameter data of the binocular camera, providing technical support for the subsequent image correction of the left-eye recognition image and the right-eye recognition image.
  • the calibration image refers to an image used for calibration, specifically an image used to calculate and determine the parameter data of the binocular camera.
  • the calibration image includes a left target image and a right target image.
  • the image to be recognized includes the left-eye original image and the right-eye original image.
  • The left-eye recognition image is the image obtained by extracting the left-eye original image from the real-time road-condition video captured by the left camera and then preprocessing it.
  • The right-eye recognition image is the image obtained by extracting the right-eye original image from the real-time road-condition video captured by the right camera and then preprocessing it. It should be noted that the left-eye recognition image and the right-eye recognition image must be obtained from the real-time road-condition video at the same moment, to ensure the accuracy of the distance data calculated later (a calibration sketch follows below).
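  • A sketch of S501 using OpenCV's checkerboard calibration follows. The checkerboard geometry (9x6 inner corners, 25 mm squares) and the calibration-image file layout are assumptions; the outputs are the internal parameters (focal length, lens distortion) of each camera and the external parameters (rotation matrix R, translation vector T) between them.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the checkerboard (assumed)
SQUARE = 0.025     # square size in meters (assumed)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("calib/left_*.png")),   # hypothetical file layout
                  sorted(glob.glob("calib/right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, PATTERN)
    okr, cr = cv2.findChessboardCorners(gr, PATTERN)
    if okl and okr:                      # keep only pairs where both views see the board
        obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

image_size = gl.shape[::-1]              # (width, height)
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)
# External parameters: rotation matrix R and translation vector T between the cameras
_, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```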
  • S502 Perform image correction on the left-eye recognition image and the right-eye recognition image based on the parameter data, and obtain a left-eye correction image and a right-eye correction image.
  • Image correction refers to mapping and transforming the left-eye recognition image and the right-eye recognition image according to the parameter data so that the epipolar lines of matching points on the two images are collinear; collinear epipolar lines can be understood as meaning that matching points on the left-eye recognition image and the right-eye recognition image lie on the same horizontal line.
  • Image correction based on the parameter data of the binocular camera can ensure the accuracy of the subsequent calculation of the distance data between the user's current position and the obstacle, and effectively reduce the amount of calculation.
  • the matching point on the left-eye recognition image and the right-eye recognition image refers to the point at the same position of the same object in the left-eye recognition image and the right-eye recognition image, for example, a point on the left ear of the same user on the left-eye recognition image and the right-eye recognition image .
  • the left-eye correction image is an image obtained after correcting the left-eye recognition image.
  • the right-eye correction image is an image obtained after correcting the right-eye recognition image.
  • Generally, the left-eye recognition image and the right-eye recognition image obtained by the binocular camera exhibit image distortion; if they were used directly to calculate the distance data between the user's current position and the obstacle, the obtained distance data would have a large error.
  • the parameter data obtained by calibration is input into OpenCV, and the affine transformation function of OpenCV is used to realize the mapping transformation processing on the left target image and the right target image.
  • the mapping transformation includes but is not limited to translation, rotation, and scaling.
  • The left-eye image mapping table reflects the mapping relationship between the left target image and the left-eye correction image after the mapping transformation.
  • The right-eye image mapping table reflects the mapping relationship between the right target image and the right-eye correction image after the mapping transformation.
  • The left-eye recognition image is corrected according to the left-eye image mapping table to obtain the left-eye correction image.
  • The right-eye recognition image is corrected according to the right-eye image mapping table to obtain the right-eye correction image. Correcting the left-eye and right-eye recognition images eliminates the influence of image distortion on subsequent ranging and ensures the reliability of the subsequently calculated distance between the user's current position and the obstacle (see the sketch below).
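  • Continuing from the calibration sketch above (K1, D1, K2, D2, R, T, image_size), S502 builds the two mapping tables and applies them so that matching points end up on the same horizontal line; `left_img` and `right_img` are assumed to be the left-eye and right-eye recognition images.

```python
import cv2

# R1/R2: rectifying rotations; P1/P2: rectified projection matrices; Q: reprojection matrix
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)

# The left-eye and right-eye image mapping tables described above
map_lx, map_ly = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

# Correcting the recognition images removes distortion; epipolar lines become collinear
left_rect = cv2.remap(left_img, map_lx, map_ly, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_img, map_rx, map_ry, cv2.INTER_LINEAR)
```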
  • S503 Use a stereo matching algorithm to perform stereo matching on the left-eye corrected image and the right-eye corrected image to obtain a disparity map.
  • the disparity map refers to an image whose image size is equal to the size of any one of the left-eye correction image and the right-eye correction image, and the element value is the disparity value.
  • the disparity value is the difference between the x-coordinates corresponding to the same point or object imaged by the left-eye camera and the right-eye camera.
  • Stereo matching refers to finding matching pixels in the left-eye correction image and right-eye correction image, and using the positional relationship between the corresponding pixels to obtain a disparity map.
  • Stereo matching algorithms include, but are not limited to, the local BM algorithm and the global SGBM algorithm provided in OpenCV.
  • The stereo matching algorithm used in this embodiment is the global SGBM algorithm.
  • The idea of SGBM is to select a disparity for each pixel to form a disparity map, and to set a global energy function related to the disparity map; minimizing this energy function solves for the optimal disparity of each pixel.
  • Specifically, the stereo matching algorithm selects the disparities of corresponding pixels in the left-eye correction image and the right-eye correction image to form a disparity map, sets a global energy function related to the disparity map, and minimizes it to solve for the optimal disparity of each pixel.
  • The optimal disparity of each pixel is used as that pixel's disparity value to generate the disparity map, from which the distance data between the user's current position and the obstacle can then be accurately calculated (a sketch follows below).
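  • A sketch of S503 using OpenCV's SGBM implementation, applied to the rectified pair from the previous sketch; the block size, disparity range, and penalty terms are illustrative values, and OpenCV requires numDisparities to be a multiple of 16.

```python
import cv2

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # disparity search range, must be a multiple of 16
    blockSize=block,
    P1=8 * block ** 2,    # penalty for small disparity changes (smoothness term)
    P2=32 * block ** 2,   # larger penalty for big disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
# compute() returns fixed-point disparities scaled by 16
disparity = sgbm.compute(left_rect, right_rect).astype("float32") / 16.0
```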
  • S504 Determine the distance data between the current position of the user and the obstacle based on the disparity map.
  • As shown in FIG. 9, the position of the obstacle is point P; the imaging width of the left-eye and right-eye cameras is l; the focal length of the binocular camera is f; the distance (baseline) between the left-eye camera and the right-eye camera is T; x_l and x_r are the abscissas of the projection points of the obstacle in the left-eye correction image and the right-eye correction image, respectively; y_r is the ordinate of the projection point of the obstacle in the right-eye correction image; and the imaging points of the obstacle on the left-eye camera and the right-eye camera are P_l and P_r, respectively.
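  • By similar triangles in the FIG. 9 geometry, the depth of point P follows the standard binocular triangulation relation (a textbook result of stereo vision rather than a formula quoted verbatim from the patent): with disparity d = x_l - x_r,

```latex
Z \;=\; \frac{f \, T}{d} \;=\; \frac{f \, T}{x_l - x_r}
```

  • A larger disparity therefore means a closer obstacle. Given the reprojection matrix Q from rectification, OpenCV's cv2.reprojectImageTo3D(disparity, Q) recovers the three-dimensional coordinates of every pixel, from which the distance between the user's current position (the origin) and the obstacle can be read off.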
  • the Zhang Zhengyou calibration method is used for calibration to obtain parameter data of the binocular camera, which provides technical support for subsequent image correction of the left-eye recognition image and the right-eye recognition image.
  • the stereo matching algorithm is used to perform stereo matching on the left-eye correction image and the right-eye correction image to obtain a disparity map. According to the disparity map, the distance data between the user's current position and the obstacle can be accurately calculated, so as to provide the user with corresponding navigation based on the distance data.
  • the computer vision-based navigation method further includes:
  • S601 Obtain a training image and a test image, where the training image and the test image carry the type of obstacle and the tag of the obstacle.
  • the training image is an image used to train the neural network model to generate a target obstacle recognition model.
  • the test image is an image used to verify the original obstacle recognition model.
  • the obstacle type refers to the type of the object that hinders the user from moving forward.
  • the obstacle type may be a movable obstacle or a fixed obstacle.
  • Obstructive object tags are tags of objects that hinder the user from moving forward.
  • obstructive object tags may be people, dogs, bicycles, trees, and so on.
  • S602 Input the training image into the neural network model for training, and obtain the original obstacle recognition model.
  • Specifically, the training images carrying the obstacle type and the obstacle label are input into the neural network model for training.
  • When the neural network model converges, the original obstacle recognition model is obtained, so that obstacles can be identified quickly in subsequent use.
  • S603 Input the test image into the original obstacle recognition model, and obtain the recognition accuracy rate output by the original obstacle recognition model.
  • the recognition accuracy refers to the probability that the original obstacle recognition model can accurately identify the type of obstacle and the tag of the obstacle in the test image.
  • Specifically, the recognition accuracy rate of the original obstacle recognition model refers to the number of test images whose recognition results are accurate divided by the total number of test images.
  • The preset accuracy threshold is set in advance and is used to judge whether the original obstacle recognition model can accurately recognize the obstacle type and the obstacle label.
  • the preset accuracy threshold may be 90%.
  • If the recognition accuracy rate is greater than the preset accuracy threshold, it indicates that the original obstacle recognition model has been trained successfully, and the original obstacle recognition model is determined as the target obstacle recognition model, ensuring the accuracy of obstacle recognition when the target obstacle recognition model judges whether there is an obstacle in the target recognition image.
  • In this embodiment, the training images are input into the neural network model for training to obtain the original obstacle recognition model, so that obstacles can be identified quickly in subsequent use.
  • The test images are input into the original obstacle recognition model to obtain the recognition accuracy rate it outputs, so as to verify whether the training of the original obstacle recognition model is successful.
  • If the recognition accuracy rate is greater than the preset accuracy threshold, the original obstacle recognition model is determined as the target obstacle recognition model, ensuring the accuracy of obstacle recognition (a training sketch follows below).
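  • The patent does not specify the network architecture, so the following PyTorch sketch assumes a small CNN classifier over 64x64 grayscale images; `train_loader` and `test_loader` are assumed DataLoaders yielding (image, label) batches for the training and test images, and the 90% threshold comes from the text.

```python
import torch
import torch.nn as nn

num_classes = 4  # assumed number of obstacle labels (person, dog, bicycle, tree, ...)
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, num_classes),  # assumes 64x64 inputs
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# S602: train on the labeled training images (fixed epoch count stands in for convergence)
for epoch in range(20):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# S603: recognition accuracy = accurately recognized test images / all test images
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        pred = model(images).argmax(dim=1)
        correct += (pred == labels).sum().item()
        total += labels.numel()
accuracy = correct / total
if accuracy > 0.90:  # preset accuracy threshold
    torch.save(model.state_dict(), "target_obstacle_model.pt")
```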
  • the obstacle object also carries an obstacle object type, and the obstacle object type includes, but is not limited to, a fixed obstacle object and a movable obstacle object.
  • the corresponding evasion reminder information is obtained according to the distance data and preset alarm conditions, and the evasion reminder information is played by the voice playback system, including:
  • If the distance data meets the preset alarm condition and the obstacle type carried by the obstacle is a fixed obstacle, the genetic algorithm is used for path planning based on the user's current position and the ending point position, and the second target route is obtained.
  • The second target route is used as the evasion reminder information, and the voice playback system plays the evasion reminder information.
  • the genetic algorithm is a computational model that simulates the biological evolution process of natural selection and genetic mechanism of Darwin's biological evolution theory, and is a way to search for the optimal solution by simulating the natural evolution process.
  • Specifically, the server uses a genetic algorithm to perform path planning according to the user's current position and the ending point position, obtains the second target route, uses the second target route as the evasion reminder information, and plays the evasion reminder information to the user through the voice playback system. The user can then walk without obstruction based on the evasion reminder information and navigate safely to the ending point position without needing to check the road directly with the eyes. Especially for users who are visually inconvenienced or otherwise unable to check road conditions in real time, planning the route based on the distance data and preset alarm conditions ensures travel safety (a minimal genetic-algorithm sketch follows below).
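  • The patent only states that a genetic algorithm searches for the second target route; the following minimal sketch evolves candidate routes over a road graph, where `dist` is a pairwise distance table and every route keeps the user's current position and the ending point fixed. All names and parameters here are illustrative assumptions.

```python
import random

def fitness(route, dist, obstacle_nodes):
    """Shorter routes score higher; routes passing through obstacle nodes are penalized."""
    length = sum(dist[a][b] for a, b in zip(route, route[1:]))
    penalty = 1e6 * sum(node in obstacle_nodes for node in route)
    return -(length + penalty)

def crossover(a, b):
    """Single-point crossover; endpoints stay fixed because both parents share them."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(route, nodes, rate=0.1):
    """Perturb interior waypoints only, keeping the start and end positions fixed."""
    inner = [random.choice(nodes) if random.random() < rate else n for n in route[1:-1]]
    return [route[0]] + inner + [route[-1]]

def plan_second_route(population, dist, obstacle_nodes, nodes, generations=100):
    """Evolve an initial population of candidate routes and return the fittest one."""
    for _ in range(generations):
        population.sort(key=lambda r: fitness(r, dist, obstacle_nodes), reverse=True)
        parents = population[: len(population) // 2]  # selection: keep the better half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)), nodes)
                    for _ in range(len(population) - len(parents))]
        population = parents + children
    return max(population, key=lambda r: fitness(r, dist, obstacle_nodes))
```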
  • If the distance data meets the preset alarm condition and the obstacle type carried by the obstacle is a movable obstacle, since the obstacle may or may not move away, the user is first reminded to stop, and the obstacle is then detected. If no obstacle is detected within the preset stop time, the first target route is used as the evasion reminder information, and the voice playback system plays it to remind the user to continue walking.
  • If the obstacle is still detected within the preset stop time, the genetic algorithm is used for path planning, the second target route is obtained and used as the evasion reminder information, and the voice playback system plays the evasion reminder information.
  • Further, the voice playback system plays continue-walking information according to the distance the user has walked.
  • For example, the continue-walking information may be "You have walked XX meters; please walk straight ahead, turn left after XX meters; you are XX meters away from the target location".
  • Alternatively, a reminder time threshold may be used; for example, the reminder time threshold may be 5 minutes.
  • When the reminder time threshold is reached, the voice playback system plays the continue-walking information.
  • In this embodiment, when the distance data meets the preset alarm condition and the obstacle type is a fixed obstacle, the genetic algorithm is used for path planning based on the user's current position and the ending point position, the second target route is obtained and used as the evasion reminder information, and the voice playback system plays the evasion reminder information.
  • Planning the second target route for the user as the evasion reminder information allows the user to walk without obstruction based on it and to navigate safely to the ending point position without directly checking with the eyes, ensuring travel safety especially for users who are visually inconvenienced or otherwise unable to check road conditions in real time.
  • When the obstacle type is a movable obstacle, the obstacle is detected, the evasion reminder information is generated based on the detection result, and the voice playback system plays the evasion reminder information, so that the user can walk without obstruction based on it and navigate safely to the target location without viewing the road directly with the eyes.
  • In an embodiment, a computer vision-based navigation device is provided, and the device corresponds one-to-one to the computer vision-based navigation method in the above embodiment.
  • the computer vision-based navigation device includes a navigation request information acquisition module 801, a first target route acquisition module 802, a current recognition result acquisition module 803, a distance data acquisition module 804, and an avoidance reminder information acquisition module 805.
  • the detailed description of each functional module is as follows:
  • the navigation request information obtaining module 801 is used to obtain navigation request information, and the navigation request information includes a starting point position and an ending point position.
  • the first target route acquisition module 802 is used for route planning according to the starting point position and the ending point position, acquiring the first target route, and playing the navigation voice data corresponding to the first target route by using a voice playback system.
  • the current recognition result acquisition module 803 is used to acquire the real-time road-condition video corresponding to the first target route, extract the image to be recognized from the video, preprocess the image to be recognized to obtain the target recognition image, and use the target obstacle recognition model to recognize the target recognition image and obtain the current recognition result.
  • the distance data acquisition module 804 is configured to, if the current recognition result is that there is an obstacle, use a computer vision tool to perform binocular distance measurement on the obstacle, and determine the distance data between the user's current position and the obstacle.
  • the evasion reminder information acquisition module 805 is configured to acquire corresponding evasion reminder information according to the distance data and preset alarm conditions, and use a voice playback system to play the evasion reminder information.
  • the navigation request information acquisition module 801 includes: a location input reminder data playback unit, a target text acquisition unit, a speech synthesis unit, and a location confirmation information receiving unit.
  • the location input reminder data playback unit is used to play the position input reminder data through the voice playback system, and receive the voice data to be recognized that is input through the voice collection system based on the position input reminder data.
  • the target text acquisition unit is used to recognize the voice data to be recognized by using a voice recognition model to obtain the target text.
  • the speech synthesis unit is used to synthesize the target text with speech synthesis technology, and obtain the to-be-confirmed speech data corresponding to the target text.
  • the location confirmation information receiving unit is used to play the voice data to be confirmed using the voice playback system, receive the location confirmation information sent by the client, and determine the navigation request information based on the target text and the location confirmation information.
  • the current recognition result acquisition module 803 includes: a to-be-processed image acquisition unit, a road condition recognition image acquisition unit, and a target recognition image acquisition unit.
  • the to-be-processed image acquisition unit is used to perform grayscale and binarization processing on the to-be-identified image to acquire the to-be-processed image.
  • the road condition recognition image acquisition unit is used to process the image to be processed by adopting the edge detection algorithm and the straight line detection algorithm to obtain the road condition recognition image.
  • the target recognition image acquisition unit is used to segment the obstacle object and the background of the road condition recognition image by using the threshold selection method to obtain the target recognition image.
  • the target recognition image includes a left-eye recognition image and a right-eye recognition image.
  • the distance data acquisition module 804 includes: a parameter data acquisition unit, an image correction unit, a disparity map acquisition unit, and a distance data determination unit.
  • the parameter data acquisition unit is used for calibration by Zhang Zhengyou calibration method to obtain parameter data of the binocular camera.
  • the image correction unit is used to perform image correction on the left-eye recognition image and the right-eye recognition image based on the parameter data, and obtain the left-eye correction image and the right-eye correction image.
  • the disparity map acquiring unit is used to perform stereo matching on the left-eye corrected image and the right-eye corrected image by using a stereo matching algorithm to obtain a disparity map.
  • the distance data determining unit is used to determine the distance data between the current position of the user and the obstacle based on the disparity map.
  • the computer vision-based navigation device further includes: training image and test image acquisition unit, original obstacle recognition model acquisition unit, recognition accuracy rate acquisition unit, and target obstacle recognition model determination unit.
  • the training image and test image acquisition unit is used to acquire the training image and the test image, and the training image and the test image carry the obstacle object type and the obstacle object label.
  • the original obstacle recognition model acquisition unit is used to input training images into the neural network model for training, and obtain the original obstacle recognition model.
  • the recognition accuracy rate acquisition unit is used to input the test image into the original obstacle recognition model to obtain the recognition accuracy rate output by the original obstacle recognition model.
  • the target obstacle recognition model determination unit is configured to determine the original obstacle recognition model as the target obstacle recognition model if the recognition accuracy rate is greater than the preset accuracy threshold.
  • the obstacle also carries the type of the obstacle;
  • the avoidance reminder information acquisition module 805 includes: a first judgment unit and a second judgment unit.
  • the first judging unit is configured to: if the distance data meets the preset alarm condition and the obstacle type carried by the obstacle is a fixed obstacle, use the genetic algorithm to plan a path based on the user's current position and the ending point position, obtain the second target route, use the second target route as the evasion reminder information, and play the evasion reminder information through the voice playback system.
  • the second judging unit is configured to: if the distance data meets the preset alarm condition and the obstacle type carried by the obstacle is a movable obstacle, detect the obstacle, generate the evasion reminder information based on the detection result, and play the evasion reminder information through the voice playback system.
  • the various modules in the above-mentioned computer vision-based navigation device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 10.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus. Among them, the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a readable storage medium and an internal memory.
  • the readable storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer readable instructions in the readable storage medium.
  • the database of the computer equipment is used to store evasion reminder information.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions are executed by the processor to realize a navigation method based on computer vision.
  • a computer device including a memory, a processor, and computer-readable instructions stored in the memory and capable of running on the processor.
  • When the processor executes the computer-readable instructions, the steps of the computer vision-based navigation method in the above embodiments are implemented, such as steps S201-S205 shown in FIG. 2 or the steps shown in FIG. 3 to FIG. 7, which are not repeated here to avoid repetition.
  • Alternatively, when the processor executes the computer-readable instructions, the functions of the modules/units in the embodiment of the computer vision-based navigation device are implemented, such as the functions of the navigation request information acquisition module 801, the first target route acquisition module 802, the current recognition result acquisition module 803, the distance data acquisition module 804, and the evasion reminder information acquisition module 805 shown in FIG. 8, which are not repeated here to avoid repetition.
  • one or more readable storage media storing computer-readable instructions are provided. When the computer-readable instructions are executed by a processor, the steps of the computer vision-based navigation method in the foregoing embodiments are implemented, for example, steps S201-S205 shown in FIG. 2 or the steps shown in FIG. 3 to FIG. 7, which are not repeated here to avoid repetition.
  • when the computer-readable instructions are executed by the processor, the functions of the modules/units in the embodiment of the computer vision-based navigation device are implemented, for example, the functions of the modules shown in FIG. 8, including the current recognition result acquisition module 803, the distance data acquisition module 804, and the avoidance reminder information acquisition module 805, which are not repeated here to avoid repetition.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
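
For concreteness, the following is a minimal, self-contained Python sketch of the avoidance logic described by the two judging units above: a fixed obstacle triggers genetic-algorithm re-planning from the user's current position to the end position, while a movable obstacle triggers a detection-based voice reminder. This is an illustration under stated assumptions, not the patent's implementation: the patent does not disclose the GA's encoding, operators, or parameters, so the 2-D waypoint encoding, the one-point crossover with point mutation, and every name and value here (plan_route_ga, handle_obstacle, the clearance and penalty constants) are hypothetical.

```python
import math
import random

def path_length(route):
    # Total Euclidean length of a polyline of (x, y) waypoints.
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

def route_cost(route, obstacles, clearance=1.0, penalty=100.0):
    # Cost = length plus a heavy penalty for intermediate waypoints that
    # pass within `clearance` metres of a known fixed obstacle.
    cost = path_length(route)
    for wp in route[1:-1]:
        if any(math.dist(wp, ob) < clearance for ob in obstacles):
            cost += penalty
    return cost

def plan_route_ga(start, end, obstacles, n_waypoints=4,
                  pop_size=40, generations=80, mutation_rate=0.2):
    # Evolve the intermediate waypoints of a start -> end route.
    xs = (min(start[0], end[0]) - 5.0, max(start[0], end[0]) + 5.0)
    ys = (min(start[1], end[1]) - 5.0, max(start[1], end[1]) + 5.0)

    def random_point():
        return (random.uniform(*xs), random.uniform(*ys))

    def full_route(waypoints):
        return [start, *waypoints, end]

    population = [[random_point() for _ in range(n_waypoints)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: route_cost(full_route(w), obstacles))
        survivors = population[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            pa, pb = random.sample(survivors, 2)
            cut = random.randrange(1, n_waypoints)      # one-point crossover
            child = pa[:cut] + pb[cut:]
            if random.random() < mutation_rate:         # point mutation
                child[random.randrange(n_waypoints)] = random_point()
            children.append(child)
        population = survivors + children
    return full_route(min(population,
                          key=lambda w: route_cost(full_route(w), obstacles)))

def handle_obstacle(distance_m, warn_threshold_m, obstacle_type,
                    current_pos, end_pos, fixed_obstacles):
    # Mirrors the two judging units: replan around a fixed obstacle,
    # warn about a movable one. Returns text for the voice playback system.
    if distance_m >= warn_threshold_m:
        return None                                     # warning condition not met
    if obstacle_type == "fixed":
        route = plan_route_ga(current_pos, end_pos, fixed_obstacles)
        return f"Fixed obstacle ahead; rerouting via {len(route) - 2} waypoints."
    return f"Moving obstacle about {distance_m:.1f} metres ahead; please take care."

if __name__ == "__main__":
    print(handle_obstacle(2.0, 5.0, "fixed", (0.0, 0.0), (10.0, 0.0), [(5.0, 0.0)]))
```

In practice the returned text would be handed to the voice playback system and the waypoints constrained to walkable ground; the elitist selection and one-point crossover above are simply representative GA choices, since the patent leaves them unspecified.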

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

A computer vision-based navigation method and apparatus, a computer device, and a storage medium are provided. The method comprises: acquiring a first target route and using a voice playback system to play the navigation voice data corresponding to the first target route; acquiring a real-time road condition video corresponding to the first target route, extracting an image to be recognized from the real-time road condition video, preprocessing the image to be recognized to obtain a target recognition image, and using a target obstacle recognition model to recognize the target recognition image to obtain a current recognition result; if the current recognition result indicates that an obstacle exists, using a machine vision tool to perform binocular distance measurement on the obstacle to determine the distance data between the user's current position and the obstacle; and obtaining the corresponding avoidance reminder information according to the distance data and a preset warning condition, using the voice playback system to play the avoidance reminder information, and planning a navigation route for the user, thereby ensuring the user's travel safety.
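
The binocular ranging step above follows the standard stereo-triangulation relation Z = f·B/d: depth equals focal length (in pixels) times camera baseline divided by horizontal disparity. The short Python sketch below illustrates only this relation; the patent does not disclose its camera parameters or matching method, so the focal length, baseline, and pixel coordinates here are made-up example values.

```python
# Stereo triangulation for two rectified, horizontally aligned cameras:
# depth Z = f * B / d, with f the focal length in pixels, B the baseline
# in metres, and d the horizontal disparity in pixels.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    # Depth in metres of a point observed with the given disparity.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

def obstacle_distance(x_left: float, x_right: float,
                      focal_px: float = 700.0, baseline_m: float = 0.12) -> float:
    # Distance to an obstacle feature matched at column x_left in the left
    # image and x_right in the right image of a rectified stereo pair.
    return depth_from_disparity(focal_px, baseline_m, x_left - x_right)

if __name__ == "__main__":
    # A 30 px disparity with a 700 px focal length and a 12 cm baseline
    # puts the obstacle at 700 * 0.12 / 30 = 2.8 m.
    print(f"{obstacle_distance(360.0, 330.0):.2f} m")
```
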
PCT/CN2020/105015 2019-12-25 2020-07-28 Computer vision-based navigation method and apparatus, computer device, and medium WO2021128834A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911356786.XA CN111060074A (zh) 2019-12-25 2019-12-25 Computer vision-based navigation method and apparatus, computer device, and medium
CN201911356786.X 2019-12-25

Publications (1)

Publication Number Publication Date
WO2021128834A1 true WO2021128834A1 (fr) 2021-07-01

Family

ID=70303426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105015 WO2021128834A1 (fr) 2019-12-25 2020-07-28 Computer vision-based navigation method and apparatus, computer device, and medium

Country Status (2)

Country Link
CN (1) CN111060074A (fr)
WO (1) WO2021128834A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111060074A (zh) * 2019-12-25 2020-04-24 深圳壹账通智能科技有限公司 Computer vision-based navigation method and apparatus, computer device, and medium
CN112083908B (zh) * 2020-07-29 2023-05-23 联想(北京)有限公司 Method for simulating the relative movement direction of an object, and audio output device
CN111932792B (zh) * 2020-08-14 2021-12-17 西藏洲明电子科技有限公司 Movable digital storage mechanism
CN112902987B (zh) * 2021-02-02 2022-07-15 北京三快在线科技有限公司 Pose correction method and apparatus
CN113687810A (zh) * 2021-05-25 2021-11-23 青岛海尔科技有限公司 Voice navigation method and apparatus, storage medium, and electronic device
CN113419257A (zh) * 2021-06-29 2021-09-21 深圳市路卓科技有限公司 Positioning calibration method and apparatus, terminal device, storage medium, and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130258078A1 (en) * 2012-03-27 2013-10-03 Yu-Chien Huang Guide device for the blind
KR20180097962A (ko) * 2017-02-24 2018-09-03 전자부품연구원 Situation-judging guide apparatus and method based on image analysis
CN108743266A (zh) * 2018-06-29 2018-11-06 合肥思博特软件开发有限公司 Intelligent navigation, obstacle-avoidance and travel assistance method and system for blind people
CN108844545A (zh) * 2018-06-29 2018-11-20 合肥信亚达智能科技有限公司 Image recognition-based travel assistance method and system
CN109059920A (zh) * 2018-06-29 2018-12-21 合肥信亚达智能科技有限公司 Intelligent navigation method and system for traffic travel safety monitoring for blind people
CN111060074A (zh) * 2019-12-25 2020-04-24 深圳壹账通智能科技有限公司 Computer vision-based navigation method and apparatus, computer device, and medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102389361B (zh) * 2011-07-18 2014-06-25 浙江大学 Computer vision-based outdoor support system for blind people
CN102973395B (zh) * 2012-11-30 2015-04-08 中国舰船研究设计中心 Multifunctional intelligent blind-guiding method, processor, and apparatus
CA2950791C (fr) * 2013-08-19 2019-04-16 State Grid Corporation Of China Binocular visual navigation system and method based on an electric power robot
CN105105992B (zh) * 2015-09-11 2017-09-22 广州杰赛科技股份有限公司 Obstacle detection method and apparatus, and smart watch
CN106871906B (zh) * 2017-03-03 2020-08-28 西南大学 Blind navigation method and apparatus, and terminal device
CN108618356A (zh) * 2018-07-05 2018-10-09 上海草家物联网科技有限公司 Smart schoolbag with obstacle reminder function and management system thereof
CN109029452B (zh) * 2018-07-10 2020-02-21 深圳先进技术研究院 Wearable navigation device and navigation method

Also Published As

Publication number Publication date
CN111060074A (zh) 2020-04-24

Similar Documents

Publication Publication Date Title
WO2021128834A1 (fr) Computer vision-based navigation method and apparatus, computer device, and medium
US11727593B1 (en) Automated data capture
US11900619B2 (en) Intelligent vehicle trajectory measurement method based on binocular stereo vision system
WO2022083402A1 (fr) Obstacle detection method and apparatus, computer device, and storage medium
TWI798305B (zh) 用於更新高度自動化駕駛地圖的系統和方法
Peng et al. A smartphone-based obstacle sensor for the visually impaired
WO2021056841A1 (fr) Positioning method, path determination method and apparatus, robot, and storage medium
JP2016029564A (ja) Object detection method and object detection device
US20170003132A1 (en) Method of constructing street guidance information database, and street guidance apparatus and method using street guidance information database
EP2720193A2 (fr) Procédé et système pour détecter des irrégularités de la surface de la route
CA3083430C (fr) Urban environment labelling
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
US20230326247A1 (en) Information processing device
WO2022041869A1 (fr) Road condition prompt method and apparatus, electronic device, storage medium, and program product
WO2022227761A1 (fr) Target tracking method and apparatus, electronic device, and storage medium
CN104123776A (zh) Image-based object counting method and system
WO2020133488A1 (fr) Vehicle detection method and device
CN106920260B (zh) Stereoscopic inertial blind-guiding method, apparatus, and system
US20210150745A1 (en) Image processing method, device, electronic apparatus, and computer readable storage medium
Gundewar et al. A review on an obstacle detection in navigation of visually impaired
KR20180068483A (ko) System and method for building a location information database of road signs, and apparatus and method for estimating a vehicle's position using the same
CN101964054A (zh) Blind sidewalk detection system based on vision processing
CN107844749B (zh) Road surface detection method and apparatus, electronic device, and storage medium
CN113469045B (zh) Visual positioning method and system for unmanned container trucks, electronic device, and storage medium
TWI451990B (zh) Lane positioning system, lane positioning method, and road surface marking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20907472

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04/11/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20907472

Country of ref document: EP

Kind code of ref document: A1