CN115218918A - Intelligent blind guiding method and blind guiding equipment - Google Patents

Intelligent blind guiding method and blind guiding equipment

Info

Publication number
CN115218918A
Authority
CN
China
Prior art keywords
library
sensitive target
layout
library position
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211141077.1A
Other languages
Chinese (zh)
Other versions
CN115218918B (en)
Inventor
石岩
李华伟
陈忠伟
王益亮
邓辉
沈锴
陆蕴凡
陈丁
李虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xiangong Intelligent Technology Co ltd
Original Assignee
Shanghai Xiangong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xiangong Intelligent Technology Co ltd filed Critical Shanghai Xiangong Intelligent Technology Co ltd
Priority to CN202211141077.1A priority Critical patent/CN115218918B/en
Publication of CN115218918A publication Critical patent/CN115218918A/en
Application granted granted Critical
Publication of CN115218918B publication Critical patent/CN115218918B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3446Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00Appliances for aiding patients or disabled persons to walk about
    • A61H3/06Walking aids for blind persons
    • A61H3/061Walking aids for blind persons with electronic detecting or guiding means

Landscapes

  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Pain & Pain Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Automation & Control Theory (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent blind guiding method and blind guiding equipment. The method comprises the following steps: step S100, establishing a library position layout corresponding to the collected environment image frames, and performing azimuth alignment on the library position layout; step S200, identifying a sensitive target in an environment image frame and acquiring its coordinates in an image coordinate system; step S300, determining the area of the library position layout in which the sensitive target is located, so as to calculate the spatial orientation of the sensitive target. The method thus provides, to a certain extent, the real environmental information around the blind person, lets the blind person know the orientation of sensitive targets in the environment, and makes it convenient for the blind person to respond effectively to that information, improving the user's safety.

Description

Intelligent blind guiding method and blind guiding equipment
Technical Field
The invention relates to a machine vision technology, in particular to an intelligent blind guiding method and blind guiding equipment suitable for target direction identification.
Background
The difficulty that blind people face in traveling has long been a topic of wide social concern, and the difficulty of guiding the blind through complex environments is one of the main reasons. For this reason, the applicant has proposed a number of blind guiding schemes, such as "positioning method, device, electronic device and storage medium of intelligent blind guiding stick" (patent application No. 2021101699225.X), which proposes to receive satellite data transmitted by an intelligent blind guiding stick, receive differential data sent by a base station, and carry out real-time kinematic carrier-phase differential processing based on the satellite data and the differential data to obtain the positioning position of the intelligent blind guiding stick, so as to eliminate common errors that affect positioning accuracy, realize high-accuracy positioning, and further realize high-accuracy blind guiding navigation.
However, such technologies determine the user's walking route from satellite positioning. If there is a problem in signal transmission with the satellite, for example under poor signal reception or in an indoor environment, this brings inconvenience and even safety risks to the user. In addition, macroscopic route information alone cannot help the user understand the exact surroundings or know the specific road conditions of the surrounding environment the way sighted people do, and therefore cannot bring the user sufficient safety.
Therefore, there is a need in the art for a blind guiding scheme that provides the blind with information for understanding their surroundings.
Disclosure of Invention
Therefore, the main objective of the present invention is to provide an intelligent blind guiding method and blind guiding device, so as to provide the blind with information that the blind can know the surrounding environment of the blind.
In order to achieve the above object, according to one aspect of the present invention, there is provided an intelligent blind guiding method, comprising the steps of:
step S100, correspondingly establishing a library position layout for the collected environment image frames, and carrying out azimuth alignment on the library position layout;
step S200, identifying a sensitive target in an environment image frame, and acquiring a coordinate of the sensitive target in an image coordinate system;
step S300 determines the region in the library position layout where the sensitive target is located, so as to calculate the position of the space where the sensitive target is located.
In a possible preferred embodiment, the library location layout establishing step includes:
step S110, establishing library position rings that diffuse gradually outward with a camera as the center; dividing the library position rings by sectors of a preset angle so as to establish library position areas on each library position ring;
step S120, recording the coordinates of the corner points of each library position area in the image coordinate system, and establishing a number for each library position area.
In a possible preferred embodiment, the step of aligning the library bit layout comprises:
step S130, adjusting the radius of each library position ring until it is aligned with the corresponding real-world position;
step S140, establishing a mapping relationship between each library position area and the corresponding real-world distance;
step S150, establishing a mapping relationship between each library position area and the corresponding direction in the image coordinate system.
In a possible preferred embodiment, the step of acquiring the coordinates of the sensitive object in the image coordinate system includes:
step S210, extracting the coordinates of the recognition frame from the recognition information of the sensitive target, so as to calculate the coordinates of each corner point of the recognition frame in the image coordinate system.
In a possible preferred embodiment, the step of determining the area in the library position layout where the sensitive target is located to calculate the spatial orientation where the sensitive target is located includes:
step S310, calculating the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout;
step S320, screening out the corner point nearest to the center of the library position layout, and calculating the library position area in which that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
In order to achieve the above object, corresponding to the above intelligent blind guiding method, in another aspect of the present invention, an intelligent blind guiding device is further provided, which includes:
the storage unit is used for storing a program of the steps of any one of the above intelligent blind guiding methods, so that the control unit, the library position management unit, the identification unit, the processing unit and the information output unit can call and execute it in due time;
wherein the control unit is configured to coordinate:
the panoramic camera is used for acquiring an environmental image frame;
the library position management unit is used for establishing a corresponding library position layout according to the environment image frame and adjusting the library position layout for azimuth alignment;
the identification unit is used for identifying the type of the sensitive target in the environmental image frame and acquiring the coordinate of the sensitive target in an image coordinate system;
the processing unit is used for judging the area of the sensitive target in the library position layout so as to calculate the spatial azimuth information of the sensitive target;
and the information output unit is used for showing the type of the sensitive target and the information of the spatial orientation where the sensitive target is located.
In a possible preferred embodiment, the step of establishing the library position layout by the library position management unit comprises: establishing, with the panoramic camera as the center, library position rings that diffuse gradually outward within the viewfinding range; dividing the library position rings by sectors of a preset angle to establish library position areas on each library position ring; and recording the coordinates of the corner points of each library position area in the image coordinate system, and establishing a number for each library position area.
In a possible preferred embodiment, the step of adjusting the library position layout for azimuth alignment by the library position management unit comprises: gradually adjusting the radius of each library position ring until it is aligned with the corresponding real-world position, then establishing a mapping relationship between each library position area and the corresponding real-world distance, and simultaneously establishing a mapping relationship between each library position area and the corresponding direction in the image coordinate system.
In a possible preferred embodiment, the step of acquiring the coordinates of the sensitive object in the image coordinate system by the identification unit includes: the identification unit establishes an enclosure frame for the sensitive target and records coordinates of each corner point of the enclosure frame under an image coordinate system.
In a possible preferred embodiment, the step of determining, by the processing unit, the area in the library position layout where the sensitive target is located to calculate the spatial orientation of the sensitive target comprises: the processing unit calculates the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout, screens out the corner point closest to the center of the library position layout, and calculates the library position area in which that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
The intelligent blind guiding method and blind guiding device provided by the invention can provide, to a certain extent, the real environmental road conditions around the blind person and help the user know the orientation of sensitive targets in the environment, such as where the blind road is, where a vehicle is, and where obstacles and pedestrians are relative to the blind person. The user can thus know the types of nearby sensitive targets together with their orientations and approximate distances, which helps the user understand the environment of the walking road section much as sighted people do. On the other hand, because the scheme does not depend on traditional satellite-positioning blind guiding, and the blind guiding information is richer, the user can respond effectively according to this information, and the safety and reliability of blind guiding are higher.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram illustrating basic steps of an intelligent blind guiding method according to the present invention;
fig. 2 is a schematic diagram of an initial library location layout of the intelligent blind guiding method of the present invention;
FIG. 3 is a schematic diagram of an image coordinate system in the intelligent blind guiding method of the present invention;
FIG. 4 is a schematic diagram of a structure of a library location area of the intelligent blind guiding method of the present invention;
FIG. 5 is a schematic diagram of an actual library site layout structure of the intelligent blind guiding method of the present invention;
FIG. 6 is a schematic diagram illustrating an example of computing a bin location area where a sensitive target is located in the intelligent blind guiding method according to the present invention;
fig. 7 is a schematic structural diagram of an intelligent blind guiding method and blind guiding equipment according to the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following will clearly and completely describe the specific technical solution of the present invention with reference to the embodiments to help those skilled in the art to further understand the present invention. It should be apparent that the embodiments described herein are only a few embodiments of the present invention, and not all embodiments. It should be noted that the embodiments and features of the embodiments in the present application can be combined with each other without departing from the inventive concept and without conflicting therewith by those skilled in the art. All other embodiments based on the embodiments of the present invention, which can be obtained by a person of ordinary skill in the art without any creative effort, shall fall within the disclosure and the protection scope of the present invention.
Furthermore, the terms "first," "second," "S1," "S2," and the like in the description and claims of the present invention and in the drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the invention described herein can be practiced in sequences other than those illustrated. Also, the terms "including" and "having," and any variations thereof, are intended to cover non-exclusive inclusions. Unless expressly stated or limited otherwise, the terms "disposed," "mounted," and "connected" are to be construed broadly and may, for example, denote a fixed, detachable, or integral connection; a mechanical or electrical connection; a direct connection, an indirect connection through an intervening medium, or internal communication between two elements. The specific meanings of the above terms in the present case can be understood by those skilled in the art in combination with the prior art as the case may be.
The blind person needs guiding because of the lack of information about the surrounding environment, which deprives their actions of a basis for judgment. The present invention therefore intends to help the blind identify and obtain the spatial information of specific sensitive target objects in the environment, so as to provide a basis of judgment for their actions.
Therefore, referring to fig. 1 to fig. 6, the intelligent blind guiding method provided by the present invention includes the following steps:
step S100 is to establish a library position layout for the collected environment image frames correspondingly, and align the library position layout in azimuth.
Specifically, the steps of establishing the library layout and performing the orientation alignment include:
Step S110, establishing library position rings that diffuse gradually outward with the camera as the center; dividing the library position rings by sectors of a preset angle to establish library position areas on each library position ring.
Step S120, recording the coordinates of the corner points of each library position area in the image coordinate system, and establishing a number for each library position area.
Step S130 adjusts the radius of each library position ring until it is aligned with the corresponding real-world position.
Step S140 establishes a mapping relationship between each library position area and the corresponding real-world distance.
Step S150 establishes a mapping relationship between each library position area and the corresponding direction in the image coordinate system.
Specifically, as shown in fig. 2, a panoramic camera is preferably adopted in the present application, capturing the environment image within 360° around the user as the information identification area. Although the library position layout is fixed within the picture presented by the panoramic camera, the camera moves and the picture changes, so the library position layout is unchanged with respect to the image coordinate system but changes with respect to the real-world coordinate system.
The computing power required by the whole method can be provided by an NVIDIA Jetson, a device equivalent to a computer that is very small in volume yet offers considerable CPU and GPU computing power. It can therefore decode the camera's video stream to obtain individual image frames. The image coordinate system is shown in fig. 3: each image has a coordinate system whose origin is the top-left vertex of the image, representing the position of each pixel, with x positive to the right and y positive downward.
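As a minimal illustration of the decoding step and the coordinate convention (the camera index and the use of opencv here are assumptions for illustration):

```python
import cv2

cap = cv2.VideoCapture(0)          # hypothetical camera index
ok, frame = cap.read()             # one decoded frame: an H x W x 3 array
if ok:
    h, w = frame.shape[:2]
    # Image coordinate system: origin at the top-left vertex of the image,
    # x positive to the right, y positive downward; frame[y, x] is one pixel.
    print(f"frame size: {w} x {h}, pixel at (x=0, y=0): {frame[0, 0]}")
cap.release()
```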
Further, using the clock dial as the basic template, the numbers symmetrical to each other are connected through the dial center, such as 12 and 6, or 9 and 3, giving 6 connected pairs of numbers.
Then, to better distinguish the directions, each line can be rotated clockwise by 15°. At this point the ground of the surrounding environment is divided, with the user as the circle center, into 12 equal sectors representing 12 directions;
in addition, a distance factor needs to be added. Similarly, circles are drawn with the panoramic camera as the center and radii of one, two, three, four, five and six meters. Besides the 1-meter-radius circle where the user is located, this yields 5 rings spaced 1 meter apart, and different rings indicate different distances from the user.
Finally, with the panoramic camera as the center, the 5 circles are drawn with cv2.circle of the opencv open-source vision library, and the corresponding 12 lines are drawn with cv2.line, forming the initial library layout shown in fig. 2. This is not yet the real library position layout, but provides a template for subsequently establishing the formal library position layout.
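For illustration, a minimal sketch of this drawing step follows; the pixel radii, canvas size and colors are assumptions, since the real radii are only fixed later during azimuth alignment:

```python
import cv2
import numpy as np

def draw_initial_layout(img, center, ring_px, n_dirs=12, offset_deg=15):
    # One circle per ring radius (in pixels); radii are provisional until
    # the azimuth-alignment step fixes them against real-world distances.
    for r in ring_px:
        cv2.circle(img, center, r, (0, 255, 0), 1)
    # 12 radial boundary lines, rotated 15 degrees clockwise as described,
    # dividing the ground into 12 equal 30-degree sectors.
    r_max = max(ring_px)
    for k in range(n_dirs):
        a = np.deg2rad(k * 360 / n_dirs + offset_deg)
        end = (int(center[0] + r_max * np.cos(a)),
               int(center[1] + r_max * np.sin(a)))
        cv2.line(img, center, end, (0, 255, 0), 1)
    return img

canvas = np.zeros((800, 800, 3), dtype=np.uint8)
draw_initial_layout(canvas, (400, 400), ring_px=[60, 120, 180, 240, 300, 360])
```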
Then, after the panoramic camera's position is determined, positioning needs to be carried out by combining the initial library layout with actual distances. As shown in fig. 2, besides the smallest circle where the user is located, 5 rings are distributed in sequence; the circumference of the smallest circle is therefore adjusted to lie 1 meter from the user in the real world, the outer edge of the first ring to lie two meters from the user, and so on until all rings are aligned with real-world distances.
It should be noted that the distance of each ring can be adjusted according to the user's situation. The above examples are only used to illustrate the implementation principle of the technology and do not limit the number of rings; those skilled in the art can increase or decrease the number of rings according to actual detection-accuracy requirements without departing from the technical concept of the present invention.
The approximate distance of an object from the user can thereby be determined from which ring the object falls in; it follows that the smaller the spacing between rings, the more accurately the distance can be located.
Further, the initial library layout is converted into the library position layout. As shown in fig. 4, the concept of the present invention is to draw an approximate trapezoid in each half-sector area according to its four corner points; this trapezoid is the formal library position area. Taking half-sector 12-3 as an example, in fig. 4 the half-sector is the original shape, and the trapezoid, such as an isosceles trapezoid, is the final library position area shape.
In this way, such an isosceles trapezoid is drawn in each small sector. For example, using opencv on the image, a mouse callback function registered with cv2.setMouseCallback is defined; when a left-click event (cv2.EVENT_LBUTTONDOWN) is captured, it obtains the coordinates of the clicked pixel, creates a solid circle with a radius of 1, i.e., draws a solid dot at the click, and displays and records the coordinates of that dot in the image coordinate system (a sketch of this marking step is given after the numbering description below).
In this way each isosceles trapezoid is drawn, the coordinates of its points are recorded while the trapezoid is displayed, and each trapezoid is numbered M-n, where M represents the direction and n the distance; e.g., 12-3 means the 12 o'clock direction at 3 meters from the user. This identifies each library position area and finally forms the library position layout shown in fig. 5.
Thus, each library position area has a number as well as the coordinates of four points, i.e., the position information representing it.
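A minimal sketch of the corner-marking step described above, assuming opencv's mouse-event API; the window name and image file are illustrative:

```python
import cv2

clicked = []   # corner coordinates of the trapezoid being marked, in image coords

def on_mouse(event, x, y, flags, param):
    # On a left click, draw a radius-1 solid dot at the click and record
    # the pixel coordinates in the image coordinate system.
    if event == cv2.EVENT_LBUTTONDOWN:
        cv2.circle(layout_img, (x, y), 1, (0, 0, 255), -1)
        clicked.append((x, y))
        print(f"corner recorded at ({x}, {y})")

layout_img = cv2.imread("initial_layout.png")   # hypothetical file name
cv2.namedWindow("layout")
cv2.setMouseCallback("layout", on_mouse)
while cv2.waitKey(20) != 27:                    # press Esc to finish
    cv2.imshow("layout", layout_img)
cv2.destroyAllWindows()
```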
Furthermore, in a use scene, the panoramic camera can be installed on equipment such as the user's blind guiding stick or a blind guiding robot. After the library position layout is initialized, library position areas are generated on the camera picture with the user as the circle center, and the coordinate values of the four vertices of each library position area are stored on the edge device connected to the camera. The edge device is equivalent to a micro server (computer): it can display the camera images, and the library position information can also be stored in its memory or hardware. Then library position layout azimuth alignment is carried out, i.e., the radius of each ring in the image is adjusted according to real ground-distance information so that real-world distances align with the library position areas, and the positions of the aligned library position areas on the image are stored on the edge device. This generates a library position layout image that moves with the user and represents real distance information;
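For illustration only, the record stored on the edge device for each library position area might be organized as follows; the field names and example values are assumptions, not taken from the patent:

```python
# Illustrative record for one library position area, keyed by its number "m-n".
bin_layout = {
    "12-3": {
        "corners": [(385, 118), (415, 118), (432, 88), (368, 88)],  # image coords
        "real_distance_m": 3,        # ring n = 3 maps to 3 m after alignment
        "direction": "12 o'clock",   # direction m = 12 in the image coordinate system
    },
    # ... one entry for each of the 60 library position areas
}
```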
step S200 identifies the sensitive target in the environmental image frame, and obtains the coordinate of the sensitive target in the image coordinate system.
The step of acquiring the coordinates of the sensitive target in the image coordinate system comprises the following steps:
Step S210 extracts the coordinates of the recognition frame from the recognition information of the sensitive target, so as to calculate the coordinates of each corner point of the recognition frame in the image coordinate system.
Specifically, after the library position layout is aligned, the target recognition step can proceed. The image acquired by the panoramic camera is sent to the identification unit, and the type of the sensitive target object is obtained after target detection. In existing target detection technology, a yolo technique such as yolo v5 is usually adopted for recognition, and after a sensitive target is recognized such a technique automatically marks a recognition frame (GT frame and prediction frame) around it. The scheme utilizes the identification information carried by the recognized object itself, [c, (x1, y1), (x2, y2), conf], where c represents the class of the sensitive target object, (x1, y1) and (x2, y2) represent the coordinates of the top-left and bottom-right vertices of the minimal recognition frame of the target object, and conf represents the confidence of the sensitive target object. From these, the coordinates of all 4 corner points of the recognition frame in the image coordinate system can be calculated.
In addition, since the present application exemplifies the use of yolov5 for target detection, which is prior art, the deep-learning and target-detection process can be carried out with reference to the prior art. The present application merely gives preferred examples: blind roads, traffic lights, standing water, pedestrians, dogs, cats, cars, buses, steps, bicycles, trolleybuses, trucks, zebra crossings and the like, screened as the basic environmental features of daily travel scenes, are taken as sensitive targets for corresponding learning, training and recognition, so as to provide the necessary environmental information for the blind person's travel.
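For illustration, a hedged sketch of obtaining the identification information [c, (x1, y1), (x2, y2), conf] with yolov5 via torch.hub; the custom weights file for the sensitive-target classes is an assumption:

```python
import cv2
import torch

# Custom weights trained on the sensitive-target classes above are assumed;
# "sensitive_targets.pt" is an illustrative file name.
model = torch.hub.load("ultralytics/yolov5", "custom", path="sensitive_targets.pt")

frame = cv2.imread("frame.jpg")                           # one decoded environment image frame
results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))   # hub models expect RGB input
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    # (x1, y1) top-left and (x2, y2) bottom-right vertex of the recognition
    # frame; the other two corners follow from the rectangle's properties.
    corners = [(x1, y1), (x2, y1), (x1, y2), (x2, y2)]
    print(model.names[int(cls)], corners, f"conf={conf:.2f}")
```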
Step S300 determines the region in the library position layout where the sensitive target is located, so as to calculate the position of the space where the sensitive target is located.
The step of judging the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target, comprises the following steps:
step S310, calculating the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout;
step S320, screening out the corner point nearest to the center of the library position layout, and calculating the library position area in which that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
Specifically, in the present case each library position area, when drawn, generates the coordinates of its four vertices, and 60 half-sector library position areas are obtained from the 12 directions and 5 annular regions. The four vertex coordinates of each library position area are known, denoted (x_mn1, y_mn1), (x_mn2, y_mn2), (x_mn3, y_mn3), (x_mn4, y_mn4), and each library position area carries a number m-n. The identification unit (yolov5) yields the identification information of a group of Q sensitive target objects, q = 1, 2, 3, …, Q, each of the form [c_q, (x_q1, y_q1), (x_q2, y_q2), conf_q], where c_q represents the class of the target object, (x_q1, y_q1) and (x_q2, y_q2) represent the coordinates of the top-left and bottom-right vertices of the minimal rectangle in which the target object is located, and conf_q represents the confidence of the target object, i.e., the accuracy of the first two pieces of information. The center point of the library position layout is O(x_O, y_O).
It is assumed that a sensitive target object enters a library position area. Since the recognition unit gives each sensitive target object its position information, i.e., (x_q1, y_q1) and (x_q2, y_q2), the target object can be replaced by the rectangular frame defined by these two points, which requires less computing power, and the object class is denoted c_q.
As shown in FIG. 6, the distances from the four vertices of the recognition frame q to the center O are first calculated. Owing to the properties of a rectangle, once the top-left vertex (x_q1, y_q1) and the bottom-right vertex (x_q2, y_q2) are known, the top-right vertex (x_q2, y_q1) and the bottom-left vertex (x_q1, y_q2) are obtained directly. The Euclidean distances from these four points to O(x_O, y_O) are then calculated:
d_TL denotes the distance from the top-left vertex (x_q1, y_q1) of rectangle q to O, and d_TR denotes the distance from the top-right vertex (x_q2, y_q1) of rectangle q to O;
d_BL denotes the distance from the bottom-left vertex (x_q1, y_q2) of rectangle q to O, and d_BR denotes the distance from the bottom-right vertex (x_q2, y_q2) of rectangle q to O.
It is thus possible to obtain, for each vertex (x, y):
d = √((x − x_O)² + (y − y_O)²)
Sorting d_TL, d_TR, d_BL, d_BR from small to large, and assuming the sorted result is d_BL < d_BR < d_TL < d_TR, the bottom-left vertex (x_q1, y_q2) is the point closest to point O, so the final calculation determines within which library position area this point falls.
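As a compact illustration of this corner screening (the function and variable names are ours, not the patent's):

```python
import math

def nearest_corner(x1, y1, x2, y2, ox, oy):
    """Return the corner of recognition frame q closest to the layout centre
    O(ox, oy), using the Euclidean distance d = sqrt((x-ox)^2 + (y-oy)^2)."""
    corners = [(x1, y1), (x2, y1), (x1, y2), (x2, y2)]   # TL, TR, BL, BR
    return min(corners, key=lambda p: math.hypot(p[0] - ox, p[1] - oy))
```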
Calculating which library position area the point (x_q1, y_q2) falls into is divided into two steps:
1. calculate the directional position of the point, i.e., which of the 12 direction regions it falls in (the 12 regions into which the 12 lines from point O divide the plane);
2. calculate the distance position of the point, i.e., which ring it falls in (the layer-by-layer annular regions diffusing outward from point O).
The specific calculation method is as follows:
In this scheme, m-n represents the number of a library position: m represents the direction position, with 360° divided into 12 equal regions of 30° each, and n represents the distance position, represented by the layer-by-layer rings running outward from the center point O, 5 layers in total.
1. Calculate the directional position of the point:
If a point M on the image is expressed in a rectangular coordinate system with point O as the origin, and β is the included angle between the straight line OM and the positive direction of the x-axis, then
β = arctan(y_M / x_M), adjusted to [0°, 360°) according to the quadrant of M,
and each direction region m corresponds to one 30°-wide interval of β values (equation 6).
Because the four vertices of the recognition frame q are coordinates in the image coordinate system, whereas the angle formula above is expressed in the rectangular coordinate system with point O(x_O, y_O) as the origin, the point (x_q1, y_q2) must first be transformed by a change of coordinate system. In the rectangular coordinate system with O as the origin its coordinates are ((x_q1 − x_O), (y_O − y_q2)), the y component being flipped because the image y-axis points downward, so that β is obtained as
β = arctan((y_O − y_q2) / (x_q1 − x_O)), adjusted by quadrant as above.
Then, the value of m is calculated from the β value by equation 6.
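Equation 6 itself is an image in the source text, so the sketch below uses one consistent reading of it: clock-face numbering with the 3 o'clock direction along the positive x-axis and 30° sectors whose boundaries are offset by 15°. The helper is illustrative, not a verbatim transcription of the patent's formula:

```python
import math

def direction_m(px, py, ox, oy):
    # Angle beta of point (px, py) measured from the positive x-axis of a
    # coordinate system centred on O(ox, oy); the image y-axis points down,
    # so the y component is flipped first.
    beta = math.degrees(math.atan2(oy - py, px - ox)) % 360
    # Assumed clock-face mapping: 3 o'clock along +x, 30-degree sectors with
    # boundaries offset by 15 degrees (one consistent reading of equation 6).
    m = (3 - round(beta / 30)) % 12
    return 12 if m == 0 else m
```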
2. Calculate the distance position of the point:
As mentioned above, drawing each library position area yields the coordinates of its 4 points, (x_mn1, y_mn1), (x_mn2, y_mn2), (x_mn3, y_mn3), (x_mn4, y_mn4), where (x_mn1, y_mn1) and (x_mn2, y_mn2) denote the two points far from point O(x_O, y_O) and (x_mn3, y_mn3) and (x_mn4, y_mn4) denote the two points near it. From these, the relation between n in the library position number and the distance L from any point M to point O can be obtained (equation 8). Because for the same n every direction region lies at the same distance from O, the regions with m = 3 are used as representatives for convenience of calculation: their near and far corner distances give the ring radii r_1 < r_2 < … < r_5, and n is the index of the innermost ring for which L ≤ r_n.
The distance from the point (x_q1, y_q2) to point O(x_O, y_O) has already been calculated above as d_BL, and the value of n is then calculated according to equation 8.
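Equation 8 is likewise an image in the source; under the reading above, a minimal sketch of the ring lookup could be:

```python
def distance_n(L, ring_radii_px):
    """Distance position n of a point whose distance from the layout centre O
    is L (in pixels). ring_radii_px holds the image radii r_1 < ... < r_5,
    read off the stored corners of the m = 3 library position areas."""
    for n, r in enumerate(ring_radii_px, start=1):
        if L <= r:
            return n
    return None   # beyond the outermost ring: not inside any library position area
```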
At this point the values of m and n have been calculated separately, and the number m-n of the library position area where the recognition frame q is located is determined. Through this number, the corresponding real-world distance and the corresponding direction in the image coordinate system can be looked up and combined with the recognized type of the sensitive target to form reliable spatial information. The blind person can thus be warned of the target type requiring attention together with its distance and direction, make an accurate judgment, and travel more safely.
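Putting the pieces together, the final look-up described in this paragraph could be sketched as follows, reusing the hypothetical nearest_corner, direction_m and distance_n helpers and the bin_layout record from the earlier sketches:

```python
import math

# x1..y2 are the recognition-frame vertices, (ox, oy) the layout centre O,
# cls_name the recognized sensitive-target class; all names are illustrative.
cx, cy = nearest_corner(x1, y1, x2, y2, ox, oy)
m = direction_m(cx, cy, ox, oy)
n = distance_n(math.hypot(cx - ox, cy - oy), ring_radii_px)
if n is not None:
    info = bin_layout[f"{m}-{n}"]
    print(f"warning: {cls_name} at {info['direction']}, about {info['real_distance_m']} m away")
```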
On the other hand, as shown in fig. 7, corresponding to the above intelligent blind guiding method, the present invention further provides an intelligent blind guiding device, which includes:
the storage unit is used for storing a program of the steps of any one of the above intelligent blind guiding methods, so that the control unit, the library position management unit, the identification unit, the processing unit and the information output unit can call and execute it in due time;
wherein the control unit is configured to coordinate:
the panoramic camera is used for acquiring an environmental image frame;
the library position management unit is used for establishing a corresponding library position layout according to the environment image frame and adjusting the library position layout for azimuth alignment;
the identification unit is used for identifying the type of the sensitive target in the environmental image frame and acquiring the coordinate of the sensitive target in an image coordinate system;
the processing unit is used for judging the area of the sensitive target in the library position layout so as to calculate the spatial azimuth information of the sensitive target;
and the information output unit, such as a loudspeaker and a display, is used for showing the type of the sensitive object and the information of the spatial orientation of the sensitive object.
Further, the step of the library position management unit establishing the library position layout includes: establishing, with the panoramic camera as the center, library position rings that diffuse gradually outward within the viewfinding range; dividing the library position rings by sectors of a preset angle to establish library position areas on each library position ring; and recording the coordinates of the corner points of each library position area in the image coordinate system, and establishing a number for each library position area.
Further, the step of the library position management unit adjusting the library position layout for azimuth alignment includes: gradually adjusting the radius of each library position ring until it is aligned with the corresponding real-world position, then establishing a mapping relationship between each library position area and the corresponding real-world distance, and simultaneously establishing a mapping relationship between each library position area and the corresponding direction in the image coordinate system.
Further, the step of acquiring the coordinates of the sensitive target in the image coordinate system by the identification unit includes: the identification unit establishes an enclosure frame for the sensitive target and records coordinates of each corner point of the enclosure frame under an image coordinate system.
Further, the step of the processing unit determining the area in the library position layout where the sensitive target is located to calculate the spatial orientation of the sensitive target includes: the processing unit calculates the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout, then screens out the corner point closest to the center and calculates the library position area in which it is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
In summary, the intelligent blind guiding method and blind guiding device provided by the invention can provide, to a certain extent, the real environmental road conditions around the blind person and help the user know the orientation of sensitive targets in the environment, such as where the blind road is, where a vehicle is and how far away, and where obstacles and pedestrians are relative to the blind person. The user can thus know the types of nearby sensitive targets together with their orientations and approximate distances, which helps the user understand the environment of the walking road section much as sighted people do. On the other hand, because the scheme does not depend on traditional satellite-positioning blind guiding, and the blind guiding information is richer, the user can respond effectively according to this information, and the safety and reliability of blind guiding are higher.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof, and any modification, equivalent replacement, or improvement made within the spirit and principle of the invention should be included in the protection scope of the invention.
It will be appreciated by those skilled in the art that, in addition to implementing the system, apparatus and various modules thereof provided by the present invention in the form of pure computer readable program code, the same procedures may be implemented entirely by logically programming method steps such that the system, apparatus and various modules thereof provided by the present invention are implemented in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
In addition, all or part of the steps of the methods of the above embodiments may be implemented by a program instructing related hardware; the program is stored in a storage medium and includes several instructions to enable a single-chip microcomputer, chip, or processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage media include: a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
In addition, any combination of various different implementation manners of the embodiments of the present invention can be made, and the embodiments of the present invention should also be regarded as the disclosure of the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (10)

1. An intelligent blind guiding method is characterized by comprising the following steps:
step S100, correspondingly establishing a library position layout for the collected environment image frames, and carrying out azimuth alignment on the library position layout;
step S200, identifying a sensitive target in an environment image frame, and acquiring a coordinate of the sensitive target in an image coordinate system;
step S300 determines the area in the library layout where the sensitive target is located, so as to calculate the orientation of the space where the sensitive target is located.
2. The intelligent blind guiding method according to claim 1, wherein the library location layout establishing step comprises:
step S110, establishing library position rings that diffuse gradually outward with a camera as the center; dividing the library position rings by sectors of a preset angle to establish library position areas on each library position ring;
step S120, recording the coordinates of the corner points of each library position area in the image coordinate system, and establishing a number for each library position area.
3. The intelligent blind guiding method according to claim 2, wherein the step of aligning the library site layout comprises:
step S130, adjusting the radius of each library position ring until the radius is aligned with the position of the real world;
step S140, establishing a mapping relation between each library location area and the distance in the corresponding real world;
step S150, establishing a mapping relationship between each library position area and the corresponding direction in the image coordinate system.
4. The intelligent blind guiding method according to claim 3, wherein the step of obtaining the coordinates of the sensitive target in the image coordinate system comprises:
step S210 extracts coordinates of the recognition frame from the recognition information of the sensitive object to calculate coordinates of each corner point of the recognition frame in the image coordinate system.
5. The intelligent blind guiding method according to claim 4, wherein the step of judging the area of the sensitive target in the library position layout to calculate the spatial orientation of the sensitive target comprises:
step S310, calculating Euclidean distances from each corner point of the sensitive target enclosure frame to the library position layout center;
step S320, screening out the corner point nearest to the center of the library position layout, and calculating the library position area in which that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
6. An intelligent blind guiding device, comprising:
a storage unit for storing a program of the intelligent blind guiding method steps according to any one of claims 1 to 5, so that the control unit, the library position management unit, the identification unit, the processing unit and the information output unit can call and execute it in due time;
wherein the control unit is configured to coordinate:
the panoramic camera is used for acquiring an environmental image frame;
the library position management unit is used for establishing a corresponding library position layout according to the environment image frame and adjusting the library position layout for azimuth alignment;
the identification unit is used for identifying the type of the sensitive target in the environmental image frame and acquiring the coordinate of the sensitive target in an image coordinate system;
the processing unit is used for judging the area of the sensitive target in the library position layout so as to calculate the spatial azimuth information of the sensitive target;
and the information output unit is used for showing the type of the sensitive target and the information of the spatial orientation where the sensitive target is located.
7. The intelligent blind guiding device according to claim 6, wherein the step of the library position management unit establishing a library position layout comprises: establishing, with the panoramic camera as the center, library position rings that diffuse gradually outward within the viewfinding range; dividing the library position rings by sectors of a preset angle to establish library position areas on each library position ring; and recording the coordinates of the corner points of each library position area in the image coordinate system, and establishing a number for each library position area.
8. The intelligent blind guiding apparatus of claim 7, wherein the step of the library position management unit adjusting the library position layout for azimuth alignment comprises: gradually adjusting the radius of each library position ring until it is aligned with the corresponding real-world position, then establishing a mapping relationship between each library position area and the corresponding real-world distance, and simultaneously establishing a mapping relationship between each library position area and the corresponding direction in the image coordinate system.
9. The intelligent blind guiding device of claim 8, wherein the step of acquiring the coordinates of the sensitive target in the image coordinate system by the identification unit comprises: the identification unit establishes an enclosure frame for the sensitive target and records coordinates of each corner point of the enclosure frame under an image coordinate system.
10. The intelligent blind guiding device according to claim 9, wherein the step of the processing unit determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target, comprises:
the processing unit calculating the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout, screening out the corner point nearest to the center of the library position layout, and calculating the library position area in which that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
CN202211141077.1A 2022-09-20 2022-09-20 Intelligent blind guiding method and blind guiding equipment Active CN115218918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211141077.1A CN115218918B (en) 2022-09-20 2022-09-20 Intelligent blind guiding method and blind guiding equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211141077.1A CN115218918B (en) 2022-09-20 2022-09-20 Intelligent blind guiding method and blind guiding equipment

Publications (2)

Publication Number Publication Date
CN115218918A true CN115218918A (en) 2022-10-21
CN115218918B CN115218918B (en) 2022-12-27

Family

ID=83617378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211141077.1A Active CN115218918B (en) 2022-09-20 2022-09-20 Intelligent blind guiding method and blind guiding equipment

Country Status (1)

Country Link
CN (1) CN115218918B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1936477A1 (en) * 2005-09-27 2008-06-25 Tamura Corporation Position information detection device, position information detection method, and position information detection program
CN102902271A (en) * 2012-10-23 2013-01-30 上海大学 Binocular vision-based robot target identifying and gripping system and method
CN102973395A (en) * 2012-11-30 2013-03-20 中国舰船研究设计中心 Multifunctional intelligent blind guiding method, processor and multifunctional intelligent blind guiding device
CN104574386A (en) * 2014-12-26 2015-04-29 速感科技(北京)有限公司 Indoor positioning method based on three-dimensional environment model matching
CN107402018A (en) * 2017-09-21 2017-11-28 北京航空航天大学 A kind of apparatus for guiding blind combinatorial path planing method based on successive frame
US20200064141A1 (en) * 2018-08-24 2020-02-27 Ford Global Technologies, Llc Navigational aid for the visually impaired
CN110118973A (en) * 2019-05-27 2019-08-13 杭州亚美利嘉科技有限公司 Warehouse Intellisense recognition methods, device and electronic equipment
CN110664593A (en) * 2019-08-21 2020-01-10 重庆邮电大学 Hololens-based blind navigation system and method
CN110837814A (en) * 2019-11-12 2020-02-25 深圳创维数字技术有限公司 Vehicle navigation method, device and computer readable storage medium
CN111743740A (en) * 2020-06-30 2020-10-09 平安国际智慧城市科技股份有限公司 Blind guiding method and device, blind guiding equipment and storage medium
WO2022078513A1 (en) * 2020-10-16 2022-04-21 北京猎户星空科技有限公司 Positioning method and apparatus, self-moving device, and storage medium
WO2022151560A1 (en) * 2021-01-14 2022-07-21 北京工业大学 Smart cane for blind people based on mobile wearable computing and fast deep neural network
CN113624236A (en) * 2021-08-06 2021-11-09 西安电子科技大学 Mobile device-based navigation system and navigation method for blind people
CN113963254A (en) * 2021-08-30 2022-01-21 武汉众智鸿图科技有限公司 Vehicle-mounted intelligent inspection method and system integrating target identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
K. IWATSUKA et al.: "Development of a guide dog system for the blind people with character recognition ability", Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004) *
ZHANG Haimin et al.: "Research on collision-avoidance path navigation methods for the blind under deep learning", Journal of Nanjing University of Information Science & Technology (Natural Science Edition) *

Also Published As

Publication number Publication date
CN115218918B (en) 2022-12-27

Similar Documents

Publication Publication Date Title
US11482008B2 (en) Directing board repositioning during sensor calibration for autonomous vehicles
US10670416B2 (en) Traffic sign feature creation for high definition maps used for navigating autonomous vehicles
CN110322702A (en) A kind of Vehicular intelligent speed-measuring method based on Binocular Stereo Vision System
CN110285793A (en) A kind of Vehicular intelligent survey track approach based on Binocular Stereo Vision System
US8665263B2 (en) Aerial image generating apparatus, aerial image generating method, and storage medium having aerial image generating program stored therein
CN112328730B (en) Map data updating method, related device, equipment and storage medium
CN103700261A (en) Video-based road traffic flow feature parameter monitoring and traffic comprehensive information service system
CN112674998B (en) Blind person traffic intersection assisting method based on rapid deep neural network and mobile intelligent device
AU2021255130B2 (en) Artificial intelligence and computer vision powered driving-performance assessment
AU2018410435B2 (en) Port area monitoring method and system, and central control system
Liu et al. Deep-learning and depth-map based approach for detection and 3-D localization of small traffic signs
JP2007004256A (en) Image processing apparatus and image processing method
CN114252884A (en) Method and device for positioning and monitoring roadside radar, computer equipment and storage medium
CN116052124A (en) Multi-camera generation local map template understanding enhanced target detection method and system
Tarko et al. Tscan: Stationary lidar for traffic and safety studies—object detection and tracking
CN114252883B (en) Target detection method, apparatus, computer device and medium
CN115218918B (en) Intelligent blind guiding method and blind guiding equipment
CN114252859A (en) Target area determination method and device, computer equipment and storage medium
CN112818866A (en) Vehicle positioning method and device and electronic equipment
CN114252868A (en) Laser radar calibration method and device, computer equipment and storage medium
CN114627398A (en) Unmanned aerial vehicle positioning method and system based on screen optical communication
CN114255264B (en) Multi-base-station registration method and device, computer equipment and storage medium
KR102613590B1 (en) Method of determining the location of a drone using 3D terrain location information and a drone thereof
KR20230065730A (en) Method of determining the location of a mobile device using 3D facility location information and apparatus thereof
KR20230065731A (en) Method of updating 3D facility location information and apparatus thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant