WO2016047890A1 - Walking assistance method and system, and recording medium for performing the method - Google Patents

Walking assistance method and system, and recording medium for performing the method

Info

Publication number
WO2016047890A1
WO2016047890A1 · PCT/KR2015/005982 · KR2015005982W
Authority
WO
WIPO (PCT)
Prior art keywords
user
obstacle
information
profile
edge
Prior art date
Application number
PCT/KR2015/005982
Other languages
English (en)
Korean (ko)
Inventor
한영준
한헌수
린칭
김민수
정환익
Original Assignee
숭실대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 숭실대학교산학협력단 filed Critical 숭실대학교산학협력단
Publication of WO2016047890A1 publication Critical patent/WO2016047890A1/fr

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F - FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F9/00 - Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A61F9/08 - Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 - Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 - Walking aids for blind persons

Definitions

  • The present invention relates to a walking assistance method and system, and a recording medium for performing the same. More particularly, it relates to a walking assistance method and system that detect and classify obstacles by combining depth information obtained with a laser sensor and edge information extracted from an image obtained with a camera, and that provide walking assistance guide information suited to the current walking environment, and to a recording medium for performing the same.
  • A conventional walking assistance method for visually impaired users either detects an object in front of the user with a laser scanner sensor and reports the measured distance to the detected object, or recognizes an object in front of the user with a camera and reports the measured distance to the recognized object.
  • However, because the laser beam is highly collimated and only one-dimensional distance information can be measured, the laser scanner sensor may miss a dangerous obstacle and cannot provide the outline of an obstacle. Accurate walking assistance information therefore cannot be provided to the user.
  • The camera, in turn, may incorrectly recognize a shadow as an obstacle, and when detecting an obstacle under direct sunlight, near an object reflecting light, against a strong rear light source, or in a low-light environment, the image is distorted by these conditions and obstacles cannot be detected accurately. Again, accurate walking assistance information cannot be provided to the user.
  • Accordingly, a walking assistance method is required that detects and classifies nearby obstacles from data generated by fusing the distance information acquired with the laser sensor and the image information acquired with the camera, and that provides the user with information on the direction in which to proceed and on the situation, according to the detected and classified obstacles and the user's walking state.
  • The present invention fuses a laser profile containing depth information with an edge profile containing the edge information of an image to generate a multi-modal profile, and uses the multi-modal profile to detect and classify obstacles at a short distance.
  • It also provides a walking assistance method and system that detect and classify obstacles at a long distance using the edge profile, and that provide the walking assistance information required by the user according to the user's walking state and the information on the detected and classified obstacles, and a recording medium for performing the same.
  • According to one aspect, a walking assistance method generates a laser profile model containing depth information obtained with a laser sensor; generates an edge profile model containing edge information extracted from an image obtained with an image sensor; generates a multi-modal profile model by fusing the laser profile model and the edge profile model; detects and classifies obstacles present at a short distance from the multi-modal profile model; detects and classifies obstacles present at a long distance from the edge profile model; recognizes the situation in front of the user receiving the walking assistance using the information on the detected and classified obstacles; and provides information on the direction in which to proceed and on the situation, according to the recognized situation in front of the user.
  • Generating the laser profile model may include classifying the laser profile according to whether the obtained depth values are similar, so that the laser profile of the ground is distinguished from the laser profile of an obstacle.
  • Classifying the laser profile according to the similarity of the obtained depth information may include calculating the difference between the obtained depth values: if the calculated difference is less than a predetermined reference value, the points carrying those depth values are classified into the same laser profile, and if the calculated difference is equal to or greater than the predetermined reference value, they are classified into different laser profiles.
  • Generating the multi-modal profile model may include matching the coordinate points of the depth information contained in the laser profile model with the corresponding points of the image contained in the edge profile model.
  • Detecting and classifying obstacles present at a short distance from the multi-modal profile model may include determining whether an edge profile connected in any of the eight directions surrounding a point where the laser profile model and the edge profile model match exists when the multi-modal profile model is generated; if such an edge profile is detected, vertical and horizontal histograms are calculated over the region where it was detected in order to detect and classify the short-range obstacle.
  • Classifying an obstacle present at a short distance may include calculating the width of the obstacle through the horizontal histogram of the laser profile model and the height of the obstacle through the vertical histogram of the edge profile model, and classifying the obstacle by inferring its size and shape from the width and height.
  • Detecting and classifying obstacles present at a long distance from the edge profile model may include extracting the vertical edge components from the acquired image, top-view transforming the extracted vertical-edge image, and performing a morphology operation on the top-view transformed image.
  • Labeling may then be performed to isolate each edge blob, which is an obstacle region; an ellipse is obtained from the edge blob through elliptic approximation, the direction of the ellipse is calculated, and the obstacle is classified according to the direction of the ellipse.
  • An optical flow vector and an extended Kalman filter may be applied to the acquired image to predict the user's movement; the situation of the walking environment, including the user's walking state and walking direction, is recognized from the user's movement and the information on the detected and classified obstacles, and information on the direction in which to proceed and on the situation is provided according to the situation of the pedestrian environment.
  • The method may be embodied as a computer program, recorded on a computer-readable recording medium, for providing information to assist the user's walking.
  • According to another aspect, a walking assistance system includes a laser scanner that detects depth information of the space located in front of the user, a camera that photographs that space, and a walking guide apparatus that generates a laser profile model using the depth information detected by the laser scanner; extracts edge information from the image captured by the camera and generates an edge profile model containing that edge information; fuses the laser profile model and the edge profile model to generate a multi-modal profile model; detects and classifies obstacles located a short distance ahead from the multi-modal profile model and obstacles present at a long distance from the edge profile model; recognizes the situation in front of the user receiving the walking assistance using the information on the detected and classified obstacles; and provides information on the walking direction and the situation according to the recognized situation.
  • Because the multi-modal profile fuses the laser profile obtained through the laser scanner with the edge profile obtained through the camera, an obstacle located at a short distance can be detected with its shape information as well as its position information, so a more accurate walking assistance service can be provided.
  • FIG. 1 is a diagram illustrating a walking assistance system according to an exemplary embodiment of the present invention.
  • FIG. 2 is a block diagram of the walking guide apparatus shown in FIG. 1.
  • FIGS. 3A and 3B illustrate an example of the multi-modal profile generator shown in FIG. 2.
  • FIG. 4 is a diagram illustrating a method of detecting an obstacle using a multi-modal profile.
  • FIGS. 5A, 5B, 5C, 5D, and 5E are diagrams for explaining a method for detecting a long-range obstacle.
  • FIG. 6 is a diagram illustrating an example of the obstacle classification unit classifying obstacles at a short distance.
  • FIG. 7 is a diagram illustrating a method of elliptic approximation of an edge blob.
  • FIGS. 8A and 8B are diagrams for explaining how the shape of a distant obstacle changes under the top-view transformation.
  • FIG. 9 is a diagram illustrating a method of calculating the directionality of an ellipse.
  • FIG. 10 is a diagram illustrating an example of the obstacle classification unit classifying obstacles at a long distance.
  • FIG. 11 is a flowchart illustrating a control method of a walking assistance system according to an exemplary embodiment of the present invention.
  • FIG. 12 is a flowchart illustrating a control method of a walking assistance system according to another exemplary embodiment of the present invention.
  • The walking assistance system 1 detects and classifies obstacles in front of the user who receives its walking assistance guide service and predicts the user's movement, thereby providing an optimal walking assistance service to the user.
  • The walking assistance system 1 may include a laser scanner 100, a camera 200, and a walking guide device 300.
  • The laser scanner 100 may detect distance information on objects in front of the user (e.g., the ground or an obstacle) by transmitting a laser beam toward the area in front of the user.
  • The laser scanner 100 may transmit a single laser pulse in front of the user.
  • The laser scanner 100 may detect an object in front of the user by receiving the laser pulse reflected from the surface of any object present within its sensor range.
  • The laser scanner 100 may calculate the distance from the laser scanner 100 to an obstacle by measuring the time it takes the transmitted laser pulse to be reflected from the object's surface and return.
  • The laser scanner 100 may be mounted on a part of the user's body, such as the waist, where movement during walking is relatively small.
  • The camera 200 may be mounted at a position close to the laser scanner 100 so as to photograph the area in front of the user.
  • The camera 200 may capture images of the user's surroundings at both near and far distances.
  • The near area may be the area within 3 m of the camera 200, and the far area the area 3 to 15 m from the camera 200.
  • The ranges of the near and far areas are adjustable: they may be set by the system manufacturer at the time of product release or by the user.
  • The walking guide apparatus 300 may predict the obstacles in front of the user and the user's movement from the information acquired through the laser scanner 100 and the camera 200, and may use both kinds of information to provide a walking assistance service optimal for the user's current walking situation.
  • The walking guide apparatus 300 includes a laser profile generator 310, an edge profile generator 320, a multi-modal profile generator 330, an obstacle detector 340, an obstacle classifier 350, and a guide information extractor 360.
  • The laser profile generator 310 may generate the laser profile from depth information obtained from the distances, measured by the laser scanner 100, between the laser scanner 100 and the object in front of the user.
  • The laser profile generator 310 may receive distance information for each point of the object in front of the user.
  • The laser profile generator 310 may acquire depth information about the object in front of the user from the received distance information.
  • The depth information indicates how close each point of the object in front of the user is to the laser scanner 100. For example, if the ground and a pillar standing on it lie in front of the user, the pillar is horizontally closer to the user than the ground behind it, and the depth information reflects that the pillar is closer than the ground.
  • The laser profile generator 310 may generate a laser profile containing the acquired depth information.
  • Since the laser profile contains depth information for each point of the object in front of the user, it may be regarded as the set of depth values of those points.
  • The laser profile generator 310 may extract the location information (e.g., the two-dimensional coordinates (x, y)) of each point of the object in front of the user.
  • The laser profile generator 310 may match the depth information of each point with the location information of that point, and may generate the laser profile from the matched depth and location information.
  • The laser profile generator 310 may classify the laser profile using the acquired depth information.
  • The laser profile generator 310 classifies the laser profile using the acquired depth information so that objects in front of the user (e.g., the ground and obstacles) are accurately distinguished. It calculates the difference between acquired depth values and checks whether the difference reaches a predetermined reference value (or threshold): if the difference is less than the reference value, the coordinate points carrying those depth values are classified into the same laser profile; if the difference is equal to or greater than the reference value, they are classified into different laser profiles.
  • For example, if the depth value of a first coordinate point is 10 cm, the depth value of a second coordinate point is 10.5 cm, and the predetermined reference value is 1 cm, the difference between the acquired depth values is 0.5 cm, which is below the reference value, so the first and second coordinate points are classified into the same laser profile (e.g., a first laser profile). If instead the depth value of the second coordinate point is 12 cm, the difference is 2 cm, which is equal to or greater than the reference value, so the two points are classified into different laser profiles (e.g., the first coordinate point into a first laser profile and the second coordinate point into a second laser profile).
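  • A minimal sketch of this grouping rule follows (Python; the function name, the 1-D array interface, and the sequential comparison of adjacent scan points are assumptions, since the patent specifies only the pairwise difference test against the reference value):

        import numpy as np

        def segment_laser_profiles(depths, ref_value=1.0):
            """Group consecutive depth readings into laser profiles.

            A new profile starts whenever the difference between neighboring
            depth values reaches the reference value, so ground points and
            obstacle points fall into separate profiles (assumed 1-D scan).
            """
            labels = np.zeros(len(depths), dtype=int)
            profile_id = 0
            for i in range(1, len(depths)):
                if abs(depths[i] - depths[i - 1]) >= ref_value:
                    profile_id += 1  # difference >= reference value: new profile
                labels[i] = profile_id
            return labels

        # Example from the text: 10 cm and 10.5 cm share a profile (0.5 < 1),
        # while 10.5 cm and 12 cm are split apart (1.5 >= 1).
        print(segment_laser_profiles(np.array([10.0, 10.5, 12.0])))  # [0 0 1]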
  • The edge profile generator 320 may generate an edge profile containing the edge information extracted from the image of the area in front of the user acquired through the camera 200.
  • The edge profile generator 320 may convert the surrounding image into a gray image.
  • The edge profile generator 320 may extract edges by applying a [-1 0 1] mask to the gray image.
  • Instead of the [-1 0 1] mask, a Sobel, Prewitt, or Roberts edge operator may be used; indeed, any mask capable of extracting edges from a gray image may be applied.
  • The edge profile generator 320 may generate the edge profile by matching each extracted edge with its location information (e.g., the two-dimensional coordinates (x, y)).
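  • A possible implementation of this edge-profile step (Python with OpenCV; the edge-magnitude threshold of 40 and the function name are assumptions not given in the patent):

        import cv2
        import numpy as np

        def make_edge_profile(bgr_image):
            """Extract edges with a [-1 0 1] mask and return their locations.

            Mirrors the edge profile: each extracted edge pixel is matched
            with its (x, y) location information.
            """
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            mask = np.array([[-1, 0, 1]], dtype=np.float32)  # simple gradient mask
            edges = np.abs(cv2.filter2D(gray.astype(np.float32), -1, mask))
            edges = edges.astype(np.uint8)
            ys, xs = np.nonzero(edges > 40)           # assumed magnitude threshold
            return edges, np.stack([xs, ys], axis=1)  # edge map + (x, y) points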
  • The multi-modal profile generator 330 may generate a multi-modal profile by fusing the laser profile generated by the laser profile generator 310 with the edge profile generated by the edge profile generator 320.
  • The multi-modal profile generator 330 fuses the laser profile and the edge profile using the location information of each object point contained in the laser profile and the location information of the edges contained in the edge profile.
  • The coordinate points contained in the laser profile are matched against the coordinate points of the edges contained in the edge profile, so that corresponding points between the two profiles are detected.
  • The multi-modal profile generator 330 may match the laser profile and the edge profile at the detected points, as shown in FIG. 3B.
  • The multi-modal profile, in which the laser profile and the edge profile are fused, thus contains both edge information and depth information at the coordinate point of each edge.
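  • The fusion step can be sketched as follows (Python; it assumes the laser scan points have already been calibrated and projected into image pixel coordinates, which the patent leaves implicit):

        import numpy as np

        def fuse_profiles(laser_points, laser_depths, edge_map):
            """Fuse a laser profile with an edge profile into a multi-modal profile.

            laser_points: (N, 2) (x, y) image coordinates of the laser readings.
            laser_depths: (N,) depth values for those points.
            edge_map:     binary edge image from the edge profile generator.
            Returns matched points carrying both edge and depth information.
            """
            h, w = edge_map.shape
            matched = []
            for (x, y), d in zip(laser_points, laser_depths):
                xi, yi = int(round(x)), int(round(y))
                if 0 <= xi < w and 0 <= yi < h and edge_map[yi, xi]:
                    matched.append((xi, yi, d))  # edge + depth at one coordinate
            return np.array(matched)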
  • The obstacle detector 340 may detect obstacles at a short distance from the user using the multi-modal profile generated by the multi-modal profile generator 330.
  • The obstacle detector 340 detects obstacles around the points where the laser profile and the edge profile were matched when the multi-modal profile was generated. As illustrated in FIG. 4, it checks whether an edge profile connected in a single shape exists in any of the eight directions surrounding a matched point. If such a connected edge profile exists, the obstacle detector 340 generates bins over the region in which it lies and computes histograms of the binned region in the vertical and horizontal directions.
  • The obstacle detector 340 detects the region with high histogram values in the vertical direction and the region with high histogram values in the horizontal direction of the binned region.
  • The obstacle detector 340 then detects the region where the high vertical histogram values overlap the high horizontal histogram values. Since an edge profile connected in a single shape is likely to be an obstacle in front of the user, this overlap region indicates where an obstacle of a single connected shape is located.
  • The obstacle detector 340 treats the overlap region as the obstacle region and performs clustering on it to remove the background region, so that only the obstacle itself is detected.
  • If no connected edge profile exists in the eight directions around a matched point, the obstacle detector 340 searches the eight directions around the other matched points for a connected edge profile in a predetermined search order (for example, from the top left of the image to the bottom right). The search order may be set by the system manufacturer at product release or by the user.
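  • The histogram test can be sketched as follows (Python; the 0.5-of-maximum cutoff for "high" histogram values is an assumption, as the patent does not state one):

        import numpy as np

        def detect_obstacle_region(edge_bin):
            """Locate the obstacle region in a binary map of connected edges.

            Histograms are computed in the vertical (row) and horizontal
            (column) directions; the overlap of high-valued rows and columns
            is returned as the obstacle bounding box (x0, y0, x1, y1).
            """
            col_hist = edge_bin.sum(axis=0)  # histogram in the horizontal direction
            row_hist = edge_bin.sum(axis=1)  # histogram in the vertical direction
            cols = np.where(col_hist > 0.5 * col_hist.max())[0]
            rows = np.where(row_hist > 0.5 * row_hist.max())[0]
            if len(cols) == 0 or len(rows) == 0:
                return None
            return (cols.min(), rows.min(), cols.max(), rows.max())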
  • FIGS. 5A, 5B, 5C, 5D, and 5E are diagrams for explaining a method for detecting a long-range obstacle; FIG. 6 illustrates an example of the obstacle classification unit classifying obstacles at a short distance; FIG. 7 illustrates a method of elliptic approximation of an edge blob; FIGS. 8A and 8B illustrate how a distant obstacle appears under the top-view transformation; FIG. 9 illustrates a method of calculating the directionality of an ellipse; and FIG. 10 illustrates an example of the obstacle classification unit classifying obstacles at a long distance.
  • The obstacle detector 340 may detect obstacles at a far distance using the edge profile.
  • The obstacle detector 340 may extract only the vertical edge components of the image of the area in front of the user from the edge profile and, as illustrated in FIG. 5C, apply a top-view transform to the resulting vertical-edge image. The top-view transform is needed because the camera attached to the user's body acquires a frontal view, from which the distance and direction to a distant obstacle are difficult to detect accurately; the obstacle detector 340 therefore converts the camera's viewpoint from the front to a top view looking straight down. Since a method of converting a front view to a top view is disclosed in Korean Patent No. 1342124, a detailed description thereof is omitted.
  • The obstacle detector 340 may perform morphological operations on the top-view image: an opening (erosion followed by dilation) to remove noise, and conversely a closing (dilation followed by erosion) to connect the edges obtained from a single object.
  • The obstacle detector 340 may detect long-range obstacles by applying labeling to the noise-removed image and detecting the edge blobs of connected edges, as shown in FIG. 5E.
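  • These morphology and labeling steps correspond directly to standard OpenCV operations (a sketch; the 3x3 kernel size is an assumption, and the input is assumed to be an 8-bit binary edge image):

        import cv2
        import numpy as np

        def far_obstacle_blobs(topview_edges):
            """Clean a binary top-view edge image and label its edge blobs.

            Opening (erosion then dilation) removes noise; closing (dilation
            then erosion) reconnects edges belonging to one object; connected-
            component labeling then isolates each edge blob.
            """
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
            opened = cv2.morphologyEx(topview_edges, cv2.MORPH_OPEN, kernel)
            closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
            n, labels = cv2.connectedComponents(closed)
            # one (x, y) pixel array per labeled edge blob (label 0 = background)
            return [np.column_stack(np.nonzero(labels == i)[::-1])
                    for i in range(1, n)]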
  • The obstacle classification unit 350 may classify each obstacle according to the size or shape detected at the near and far distances.
  • The obstacle classification unit 350 may classify a short-range obstacle detected using the multi-modal profile.
  • The obstacle classification unit 350 may infer the size or shape of the detected obstacle by calculating its width and height from the multi-modal profile: the width of the obstacle is calculated through the horizontal histogram of the laser profile contained in the multi-modal profile, and the height of the obstacle through the vertical histogram of the edge profile contained in the multi-modal profile.
  • As shown in FIG. 6, the obstacle classification unit 350 may classify the obstacle in two or three stages. In the first stage, it classifies the obstacle using its depth information.
  • The obstacle classification unit 350 compares the depth information of the detected obstacle with the depth information contained in the laser profile of the ground on the same horizontal line.
  • Based on this comparison, the obstacle is classified as positive if its depth value relative to the ground is positive, as ground if its depth value equals that of the ground, and as negative if its depth value is negative.
  • In the next stage, the obstacle classification unit 350 classifies the obstacle according to its shape, which is inferred from the calculated ratio of the obstacle's width to its height.
  • For an obstacle classified as positive, a high width-to-height ratio means the obstacle is wider than it is tall, so its shape is planar, like an object lying on the floor; a low ratio means it is taller than it is wide, so its shape is vertical, like an object standing on the floor.
  • An obstacle classified as ground is literally the ground and has no shape to classify. An obstacle classified as negative is classified by its width: if the width is negligibly narrow compared with the height it is classified as a hole; if the width is similar to the height, or greater than the width of an obstacle classified as a hole, it is classified as an n-curb; and if the width is much larger than the height it is classified as a drop-off, like an inclined surface.
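  • The staged decision can be summarized in code (a sketch; the category names follow the text, but the numeric ratio cutoffs are assumptions, since FIG. 6 is not reproduced here):

        def classify_near_obstacle(depth_rel, width, height):
            """Two/three-stage short-range classification.

            depth_rel: obstacle depth relative to the ground on the same
            horizontal line (positive = rises above it, negative = drops
            below); width and height come from the profile histograms.
            """
            if depth_rel > 0:                      # stage 1: depth sign
                ratio = width / height             # stage 2: shape from ratio
                return "positive/planar" if ratio > 1.0 else "positive/vertical"
            if depth_rel == 0:
                return "ground"                    # nothing further to classify
            # negative obstacles: classified by width relative to height
            if width < 0.2 * height:
                return "negative/hole"
            if width <= height:
                return "negative/curb"
            return "negative/drop-off"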
  • The obstacle classification unit 350 may also classify a long-range obstacle detected using the edge profile.
  • The obstacle classification unit 350 fits an ellipse to each edge blob detected by the obstacle detector 340 by elliptic approximation, and classifies the long-range obstacle using the ellipse's major-to-minor axis ratio and the direction of the ellipse.
  • As shown in FIG. 7, the obstacle classification unit 350 approximates the edge blob detected by the obstacle detector 340 with an ellipse.
  • The ellipse approximation is computed from the moments of the pixels contained in the edge blob; the rotation, major-axis length, and minor-axis length of the ellipse are obtained through Equation 1.
  • [Equation 1]  θ = (1/2)·arctan(2μ11 / (μ20 − μ02)),  a = 2√λ1,  b = 2√λ2,  where λ1,2 = ((μ20 + μ02) ± √(4μ11² + (μ20 − μ02)²)) / 2. Here θ is the rotation of the ellipse, a is the length of the long axis of the ellipse, b is the length of the short axis of the ellipse, and μpq is the central image moment of order (p, q) of the edge blob.
  • mpq is an image moment for a certain area R, defined by Equation 2: [Equation 2]  mpq = Σ(x,y)∈R x^p·y^q; the central moments μpq of Equation 1 are these moments taken about the blob's centroid.
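  • In code, the moment-based fit reads as follows (Python; this follows the standard central-moment formulas used to reconstruct Equation 1 above, not a verbatim transcription of the patent's equations):

        import numpy as np

        def fit_ellipse_moments(blob_xy):
            """Elliptic approximation of an edge blob from its image moments.

            blob_xy: (N, 2) array of the blob's pixel coordinates.
            Returns the center, the rotation theta, and the major/minor
            axis lengths a and b.
            """
            x = blob_xy[:, 0].astype(float)
            y = blob_xy[:, 1].astype(float)
            xc, yc = x.mean(), y.mean()        # centroid from the raw moments
            mu20 = np.mean((x - xc) ** 2)      # central moments
            mu02 = np.mean((y - yc) ** 2)
            mu11 = np.mean((x - xc) * (y - yc))
            theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # ellipse rotation
            root = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
            lam1 = (mu20 + mu02 + root) / 2
            lam2 = max((mu20 + mu02 - root) / 2, 0.0)
            a, b = 2 * np.sqrt(lam1), 2 * np.sqrt(lam2)  # long / short axis lengths
            return (xc, yc), theta, a, b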
  • The obstacle classification unit 350 may detect whether an obstacle is standing on the ground or lying on it by analyzing the distortion introduced into the image by the top-view conversion.
  • Under the top-view conversion, an object of a certain height or more tends to fall radially away from the camera 200.
  • Referring to FIG. 8A, when the viewpoint of an image of an object with a certain height, such as a bar or a person, is converted to a top view, the horizontal edges of the object remain parallel to the ground and do not change, but the length of its vertical edges increases as they stretch away from the camera position. Conversely, referring to FIG. 8B, an object lying flat on the ground shows little such distortion under the conversion.
  • The obstacle classification unit 350 analyzes this distortion of the top-view image to distinguish whether a distant obstacle is standing on the ground or lying on it. Referring to FIG. 9, the obstacle classification unit 350 computes, through Equation 3, the DRO (Deviation from Radial Orientation), which expresses the degree of the distortion as the deviation between the direction of the ellipse (obstacle) and the angle of the straight line joining the original position of the camera 200 to the center of the ellipse (obstacle) in the top-view image. If the DRO value is greater than or equal to a predetermined value, the obstacle is determined to be standing on the ground; if it is less than the predetermined value, the obstacle is determined to be lying on the ground.
  • [Equation 3]  DROi = | θi − arctan((yi − yo) / (xi − xo)) |, where (xi, yi) is the coordinate of the center point of the i-th ellipse, (xo, yo) represents the coordinates of the original position of the camera 200, and θi is the angle between the major axis of the i-th ellipse and the horizontal plane.
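  • A corresponding computation (a sketch of the reconstructed Equation 3, with angle wrap-around handled explicitly):

        import numpy as np

        def dro(center, theta, camera_origin):
            """Deviation from Radial Orientation of one ellipse in the top view.

            center:        (x, y) of the i-th ellipse's center point.
            theta:         angle between the ellipse's major axis and the horizontal.
            camera_origin: (x, y) original position of the camera 200.
            """
            dx = center[0] - camera_origin[0]
            dy = center[1] - camera_origin[1]
            radial = np.arctan2(dy, dx)      # angle of the camera-to-center line
            d = abs(theta - radial) % np.pi  # orientations repeat every 180 degrees
            return min(d, np.pi - d)

        # Per the text: DRO >= threshold -> standing on the ground,
        # otherwise lying on it (the threshold value is not given in the patent).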
  • The obstacle classification unit 350 classifies a long-range obstacle using the DRO value and the major-to-minor axis ratio of its ellipse. Referring to FIG. 10, if the DRO value is greater than or equal to the predetermined value, the obstacle is classified as vertical, that is, a shape standing erect on the ground; otherwise it is classified as planar, a shape lying on the ground. As the next step, the obstacle classification unit 350 classifies the obstacle using the major-to-minor axis ratio: among obstacles classified as vertical, those with a high ratio are classified as poles, which have a long, thin shape, and those with a low ratio as blocks, which have a broad shape.
  • Among obstacles classified as planar, those whose major-to-minor axis ratio is equal to or greater than a predetermined ratio are classified as curbs, representing sidewalk blocks, and those whose ratio is less than the predetermined ratio as piles, representing flat heaps.
  • The guide information extractor 360 may extract the optimal walking assistance guide information for the walking environment, including information about the user's movement and the obstacles currently in front of the user.
  • The guide information extractor 360 may predict the user's current motion using optical flow vectors.
  • The guide information extractor 360 may set a region of interest at the bottom center of the image and calculate the optical flow vectors of the region of interest to predict the user's movement.
  • The guide information extractor 360 divides the region of interest into nine zones, calculates the optical flow vector of each zone, and averages the calculated vectors to minimize the error rate of the optical flow estimate.
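  • A sketch of this motion estimate (Python with OpenCV; the ROI placement, its size, and the use of Farneback dense optical flow are assumptions; the patent states only that the ROI is split into nine zones whose vectors are averaged):

        import cv2
        import numpy as np

        def user_motion_vector(prev_gray, cur_gray):
            """Average optical flow over a 3x3 grid of a bottom-center ROI."""
            h, w = cur_gray.shape
            roi = (slice(2 * h // 3, h), slice(w // 3, 2 * w // 3))  # bottom center
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray[roi], cur_gray[roi], None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            rh, rw = flow.shape[:2]
            zones = []
            for i in range(3):                    # nine zones: a 3 x 3 grid
                for j in range(3):
                    z = flow[i * rh // 3:(i + 1) * rh // 3,
                             j * rw // 3:(j + 1) * rw // 3]
                    zones.append(z.reshape(-1, 2).mean(axis=0))
            return np.mean(zones, axis=0)         # averaged (dx, dy) motion vector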
  • The guide information extractor 360 may recognize the user's current situation from the obstacle information obtained through the obstacle detector 340 and the obstacle classification unit 350 and from the user's movement obtained with the optical flow vectors. For example, if an obstacle is located 10 m away in the user's 10 o'clock direction and the user is currently walking toward 12 o'clock, the obstacle is far away and not in the direction of travel, so a collision is unlikely and the user's walking state can be detected as a safe state. If the obstacle is 10 m away at 10 o'clock and the user is currently walking toward 10 o'clock, the obstacle lies in the user's direction of travel but is still far away, so the user's walking state can be detected as a normal state. If an obstacle is 3 m away at 12 o'clock and the user is walking toward 12 o'clock, the user's walking state can be detected as a dangerous state.
  • The guide information extractor 360 may also detect information on the direction in which the user should proceed according to the user's current situation.
  • If the user would collide with the obstacle ahead by continuing in the current direction, the guide information extractor 360 applies an extended Kalman filter to detect the direction in which the user must proceed to avoid the collision.
  • Since the extended Kalman filter can predict the position to which the user will move after a certain time, it can predict whether the user will avoid the obstacle ahead when turning by a given angle from the current heading.
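  • The avoidance test can be illustrated with a constant-velocity prediction standing in for the extended Kalman filter's state propagation (the patent does not give the filter's state model, so the candidate angles, time horizon, and clearance below are all assumptions):

        import numpy as np

        def predict_position(pos, vel, turn_deg, horizon_s=5.0):
            """Project the user's position after turning and walking for horizon_s."""
            speed = np.linalg.norm(vel)
            angle = np.arctan2(vel[1], vel[0]) + np.deg2rad(turn_deg)
            return pos + horizon_s * speed * np.array([np.cos(angle), np.sin(angle)])

        def pick_direction(pos, vel, obstacle, clearance=1.0):
            """Return the smallest turn whose predicted position clears the obstacle."""
            for deg in (0, 10, -10, 20, -20, 30, -30):  # assumed candidate turns
                p = predict_position(pos, vel, deg)
                if np.linalg.norm(p - obstacle) > clearance:
                    return deg                  # turn angle relative to current heading
            return None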
  • The guide information extractor 360 may thereby extract the optimal walking assistance guide information for the user's current situation.
  • For example, the guide information extractor 360 applies the extended Kalman filter to the user's current heading to detect whether the position reached by turning 10° clockwise from 12 o'clock and moving 5 m coincides with the obstacle's position; if the position after that movement does not coincide with the obstacle's position, the optimal walking direction for the situation is detected as the direction rotated 10° clockwise from the current direction.
  • The guide information extractor 360 may generate a sentence with a predetermined pattern by extracting words corresponding to the user's current walking environment.
  • The guide information extractor 360 may define words corresponding to the user's walking state.
  • The user's walking state is divided into three stages, a safe state, a normal state, and a dangerous state, and for each state the guide information extractor 360 defines words covering the direction in which to proceed, the user's action, and the position of the obstacle.
  • If the user's walking state is the safe state, the word corresponding to the direction to proceed may be defined as "current direction" or "12 o'clock," the words corresponding to the user's action as "keep going as now" or "walk at an average pace," and the words corresponding to the obstacle's position as "rod," "10 o'clock," and "10 m."
  • If the user's walking state is the normal state, the word corresponding to the direction to proceed is the direction detected by the extended Kalman filter, for example "11 o'clock"; the word corresponding to the user's action may be defined as "walk briskly"; and the words corresponding to the obstacle's position are "bar," "12 o'clock," and "10 m."
  • If the user's walking state is the dangerous state, the word corresponding to the direction to proceed is the direction detected by the extended Kalman filter, for example "10 o'clock," and the word corresponding to the user's action may be defined as "go quickly" or "stop."
  • The guide information extractor 360 may generate the sentence with the predetermined pattern by inserting the words defined according to the user's walking state.
  • For example, when the user's walking state is the safe state, the guide information extractor 360 inserts the defined words, that is, "current direction," "walk at an average pace," "rod," "10 o'clock," and "10 m," into the preset sentence to produce a guide message such as "A (rod) is (10 m) ahead at (10 o'clock); keep the (current direction) and (walk at an average pace)."
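  • The template step amounts to slot filling (a sketch; the exact per-state wording is an assumption paraphrasing the example above):

        def make_guide_sentence(state, words):
            """Fill the preset sentence pattern with the state-specific words."""
            patterns = {
                "safe":      "A {obstacle} is {distance} ahead at {clock}; "
                             "keep the {direction} and {action}.",
                "normal":    "A {obstacle} is {distance} ahead at {clock}; "
                             "head toward {direction} and {action}.",
                "dangerous": "A {obstacle} is {distance} ahead at {clock}; "
                             "{action} and turn toward {direction}.",
            }
            return patterns[state].format(**words)

        print(make_guide_sentence("safe", {
            "obstacle": "rod", "distance": "10 m", "clock": "10 o'clock",
            "direction": "current direction", "action": "walk at an average pace"}))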
  • The walking assistance system 1 may output a guide message containing the sentence generated by the guide information extractor 360.
  • The walking assistance system 1 may output the guide message through an output module (not shown) provided in the walking guide device 300, or through an earphone (not shown) or headset (not shown) worn on the user's ear.
  • A laser profile containing depth information for each point is generated from the per-point distance information acquired through the laser scanner 100 (410).
  • An edge profile containing edge information is generated by extracting the vertical and horizontal edges from the surrounding image captured by the camera 200 (415).
  • A multi-modal profile is generated by fusing the generated laser profile and edge profile (420).
  • In the fusion, the point of the edge profile corresponding to each coordinate point of the laser profile is detected, and the laser profile and the edge profile are matched at the detected points.
  • If edge profiles connected in a single shape are detected in the eight directions around a matched point (425, 435), an obstacle is detected by calculating histograms over the region where the connected edge profiles were found (440).
  • Specifically, histograms are calculated in the vertical and horizontal directions over the region containing the connected edge profile; the region where high vertical histogram values overlap high horizontal histogram values is detected as the obstacle region, and clustering is applied to the detected obstacle region so that only the obstacle, excluding the background region, is detected.
  • The width of the obstacle is calculated through the horizontal histogram of the laser profile, and the height of the obstacle through the vertical histogram of the edge profile, to infer the size or shape of the obstacle (445).
  • The width-to-height ratio computed from the calculated width and height indicates whether the obstacle is standing on the ground or lying on it.
  • A large width-to-height ratio suggests a flat shape, such as a pile or a low sidewalk block, and a small width-to-height ratio an elongated shape, such as a rod or a person.
  • The obstacle is classified according to its inferred size or shape (450), and the user's movement is predicted by applying the optical flow vectors and the extended Kalman filter (455).
  • The optical flow vectors yield information on the user's current heading, and the extended Kalman filter yields information on the direction in which the user should move to avoid colliding with obstacles.
  • The optimal walking direction is calculated using the information on the detected and classified obstacles and the information on the user's movement (460).
  • A walking assistance guide message containing the calculated optimal walking direction and the obstacle information is generated and output.
  • The walking assistance system 1 may recognize the user's current walking state from the information about the user's movement and the obstacles, and may generate the walking assistance guide message by selecting the words defined for the recognized walking state.
  • An edge profile containing edge information is generated by extracting the vertical and horizontal edges from the surrounding image captured by the camera 200 (510).
  • The image, photographed from a front view, is converted to a top view (520).
  • Since horizontal edges are parallel to the ground, no distortion occurs when they are converted to the top view, but vertical edges are distorted by the conversion.
  • The walking assistance system 1 of the present invention may analyze this distortion to detect whether an obstacle is standing on the ground or lying down.
  • Noise that could affect the obstacle detection is removed, and labeling is performed, using the property that the edges of a single object are connected, to extract the edge blobs containing obstacles (525).
  • An ellipse approximation is performed on each extracted edge blob to obtain an ellipse from the pixels forming the blob (530).
  • The major- and minor-axis lengths of each obtained ellipse are calculated, and the directionality of the ellipse, which represents how much the direction of the ellipse was changed by the distortion introduced by the top-view conversion, is detected (535).
  • The major-to-minor axis ratio is calculated from the calculated axis lengths, and the obstacle is classified according to this ratio and the directionality of the ellipse (540).
  • The optical flow vectors and the extended Kalman filter are applied to predict the user's movement (545).
  • The optical flow vectors yield information on the user's current heading, and the extended Kalman filter yields information on the direction in which the user should move to avoid colliding with obstacles.
  • The optimal walking direction is calculated using the information on the detected and classified obstacles and the information on the user's movement.
  • A walking assistance guide message containing the calculated optimal walking direction and the obstacle information is generated and output.
  • The walking assistance system 1 may recognize the user's current walking state from the information about the user's movement and the obstacles, and may generate the walking assistance guide message by selecting the words defined for the recognized walking state.
  • The technique described above, which detects obstacles at short and long range and predicts the user's movement to provide an optimal walking assistance service, may be implemented as an application or in the form of program instructions executable through various computer components and recorded on a computer-readable recording medium.
  • The computer-readable recording medium may contain program instructions, data files, data structures, and the like, alone or in combination.
  • The program instructions recorded on the computer-readable recording medium may be those specially designed and configured for the present invention, or those known and available to those skilled in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine code generated by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like.
  • The hardware device may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa.

Abstract

The invention relates to a walking assistance method and system, and a recording medium for performing the method. The walking assistance method comprises: generating a laser profile containing depth information for each point of an object in front of a user; extracting edges from an image acquired with a camera to generate an edge profile containing the edge information of the image; generating a multi-modal profile by fusing the generated laser profile and edge profile; detecting and classifying obstacles located at a short distance using the generated multi-modal profile; predicting a movement of the user using the multi-modal profile; recognizing the user's walking environment using information about the detected and classified obstacles and information about the user's movement; and providing a walking assistance guidance service to the user according to the recognized walking environment.
PCT/KR2015/005982 2014-09-26 2015-06-15 Walking assistance method and system, and recording medium for performing the method WO2016047890A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2014-0129563 2014-09-26
KR20140129563 2014-09-26

Publications (1)

Publication Number Publication Date
WO2016047890A1 true WO2016047890A1 (fr) 2016-03-31

Family

ID=55581385

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/005982 WO2016047890A1 (fr) 2014-09-26 2015-06-15 Walking assistance method and system, and recording medium for performing the method

Country Status (1)

Country Link
WO (1) WO2016047890A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060071507A (ko) * 2004-12-22 2006-06-27 주식회사 팬택앤큐리텔 보행 보조 장치 및 방법
KR20100111543A (ko) * 2009-04-07 2010-10-15 주식회사 만도 차량 인식 방법 및 장치
KR20120034352A (ko) * 2010-10-01 2012-04-12 한국전자통신연구원 장애물 감지 시스템 및 방법
KR20130011608A (ko) * 2011-07-22 2013-01-30 에스케이플래닛 주식회사 표본 프로파일 정보 기반 움직임 추정장치 및 방법
KR101428403B1 (ko) * 2013-07-17 2014-08-07 현대자동차주식회사 전방 장애물 검출 장치 및 방법

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIN, QING ET AL.: "A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model", Sensors, vol. 14, 2014, pages 18670-18700 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156751A (zh) * 2016-07-25 2016-11-23 上海肇观电子科技有限公司 一种向目标对象播放音频信息的方法及装置
CN106156751B (zh) * 2016-07-25 2019-05-07 上海肇观电子科技有限公司 一种向目标对象播放音频信息的方法及装置
CN110826512A (zh) * 2019-11-12 2020-02-21 深圳创维数字技术有限公司 地面障碍物检测方法、设备及计算机可读存储介质
CN110826512B (zh) * 2019-11-12 2022-03-08 深圳创维数字技术有限公司 地面障碍物检测方法、设备及计算机可读存储介质
WO2022068193A1 (fr) * 2020-09-30 2022-04-07 深圳市商汤科技有限公司 Dispositif portable, procédé et appareil de guidage intelligents, système de guidage et support de stockage
CN112556687A (zh) * 2020-12-08 2021-03-26 广州赛特智能科技有限公司 一种机器人启动定位方法、系统、电子设备及存储介质
CN112556687B (zh) * 2020-12-08 2023-04-07 广州赛特智能科技有限公司 一种机器人启动定位方法、系统、电子设备及存储介质
CN116434346A (zh) * 2023-06-12 2023-07-14 四川汉唐云分布式存储技术有限公司 无人值守商店内顾客行为的检测方法、装置及存储介质
CN116434346B (zh) * 2023-06-12 2023-08-18 四川汉唐云分布式存储技术有限公司 无人值守商店内顾客行为的检测方法、装置及存储介质

Similar Documents

Publication Publication Date Title
WO2016047890A1 (fr) Procédé et système d'aide à la marche, et support d'enregistrement pour mettre en oeuvre le procédé
WO2011052826A1 (fr) Procédé de création et d'actualisation d'une carte pour la reconnaissance d'une position d'un robot mobile
WO2019225817A1 (fr) Dispositif d'estimation de position de véhicule, procédé d'estimation de position de véhicule et support d'enregistrement lisible par ordinateur destiné au stockage d'un programme informatique programmé pour mettre en œuvre ledit procédé
Porikli Trajectory distance metric using hidden markov model based representation
WO2017030259A1 (fr) Véhicule aérien sans pilote à fonction de suivi automatique et son procédé de commande
CN111753797B (zh) 一种基于视频分析的车辆测速方法
US8238607B2 (en) System and method for detecting, tracking and counting human objects of interest
WO2011052827A1 (fr) Dispositif et procédé de détection de glissement pour robot mobile
WO2011013862A1 (fr) Procédé de commande pour la localisation et la navigation de robot mobile et robot mobile utilisant un tel procédé
WO2012011713A2 (fr) Système et procédé de reconnaissance de voie de circulation
US20060067562A1 (en) Detection of moving objects in a video
JP2009143722A (ja) 人物追跡装置、人物追跡方法及び人物追跡プログラム
WO2015105239A1 (fr) Système et procédé de détection de positions de véhicules et de voise
WO2020036295A1 (fr) Appareil et procédé d'acquisition d'informations de conversion de coordonnées
JP2006251596A (ja) 視覚障害者支援装置
WO2020159076A1 (fr) Dispositif et procédé d'estimation d'emplacement de point de repère, et support d'enregistrement lisible par ordinateur stockant un programme informatique programmé pour mettre en œuvre le procédé
CN106503632A (zh) 一种基于视频分析的自动扶梯智能安全监测方法
Chuang et al. Carried object detection using ratio histogram and its application to suspicious event analysis
KR20190051128A (ko) 머신러닝 기법을 이용한 행동인지 기반 보행취약자 검출 방법 및 시스템
WO2019147024A1 (fr) Procédé de détection d'objet à l'aide de deux caméras aux distances focales différentes, et appareil associé
WO2016209029A1 (fr) Système d'auto-guidage optique à l'aide d'une caméra stéréoscopique et d'un logo et procédé associé
WO2012011715A2 (fr) Système d'avertissement de collision de véhicules et son procédé
WO2020171605A1 (fr) Procédé de fourniture d'informations de conduite et serveur de fourniture de carte de véhicules et procédé associé
WO2020067751A1 (fr) Dispositif et procédé de fusion de données entre capteurs hétérogènes
Snaith et al. A low-cost system using sparse vision for navigation in the urban environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15844709

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15844709

Country of ref document: EP

Kind code of ref document: A1