CN115218918B - Intelligent blind guiding method and blind guiding equipment
- Publication number
- CN115218918B (Application CN202211141077.1A)
- Authority
- CN
- China
- Prior art keywords
- library
- library position
- sensitive target
- layout
- coordinate system
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3446—Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H3/00—Appliances for aiding patients or disabled persons to walk about
- A61H3/06—Walking aids for blind persons
- A61H3/061—Walking aids for blind persons with electronic detecting or guiding means
Landscapes
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Rehabilitation Therapy (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Physical Education & Sports Medicine (AREA)
- Pain & Pain Management (AREA)
- Epidemiology (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an intelligent blind guiding method and blind guiding equipment. The method comprises the following steps: step S100, establishing a library position layout corresponding to the collected environment image frames, and performing azimuth alignment on the library position layout; step S200, identifying a sensitive target in an environment image frame, and acquiring the coordinates of the sensitive target in an image coordinate system; step S300, determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target. The method thereby provides, to a certain extent, real environmental information around the blind person, helps the user grasp the orientation of sensitive targets in the environment, enables the user to respond to that information effectively, and improves the user's safety.
Description
Technical Field
The invention relates to machine vision technology, and in particular to an intelligent blind guiding method and blind guiding equipment suitable for target orientation identification.
Background
The difficulty blind people face when traveling has long been a topic of wide social concern, and the difficulty of guiding the blind through complex environments is one of its main causes. For this reason, applicants in this field have proposed many blind guiding schemes, such as "Positioning method, device, electronic device and storage medium of intelligent blind guiding stick" (patent application No. 2021101699225.X), which proposes to receive satellite data transmitted by an intelligent blind guiding stick; receive differential data sent by a base station; and perform real-time kinematic carrier-phase differential processing based on the satellite data and the differential data to obtain the position of the intelligent blind guiding stick, so as to eliminate the common errors affecting positioning accuracy, realize high-accuracy positioning, and thereby achieve high-accuracy blind guiding navigation.
However, such technologies share a problem: the user's walking route is determined from satellite positioning, so if signal transmission with the satellite is impaired, for example in an environment with poor signal reception or indoors, the user is inconvenienced and may even face safety problems.
Therefore, there is a need in the art for a blind guiding scheme that provides the blind with information about their surrounding environment.
Disclosure of Invention
Therefore, the main objective of the present invention is to provide an intelligent blind guiding method and blind guiding device that give the blind person information about his or her surrounding environment.
In order to achieve the above object, according to one aspect of the present invention, there is provided an intelligent blind guiding method, comprising the steps of:
step S100, correspondingly establishing a library position layout for the collected environment image frames, and carrying out azimuth alignment on the library position layout;
step S200, identifying a sensitive target in an environment image frame, and acquiring the coordinates of the sensitive target in an image coordinate system;
step S300, determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target.
In a possible preferred embodiment, the library position layout establishing step includes:
step S110, establishing library position rings that diffuse gradually outward, with a camera as the center; dividing the library position rings into sectors of a preset angle to establish the library position areas on each library position ring;
step S120, recording the coordinates of the corner points of each library position area in the image coordinate system, and assigning a number to each library position area.
In a possible preferred embodiment, the step of aligning the library position layout comprises:
step S130, adjusting the radius of each library position ring until it is aligned with its real-world position;
step S140, establishing a mapping relation between each library position area and the distance in the corresponding real world;
step S150, establishing a mapping relation between each library position area and the direction in the corresponding image coordinate system.
In a possible preferred embodiment, the step of acquiring the coordinates of the sensitive target in the image coordinate system includes:
step S210, extracting the coordinates of the recognition frame from the recognition information of the sensitive target, so as to calculate the coordinates of each corner point of the recognition frame in the image coordinate system.
In a possible preferred embodiment, the step of determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target, includes:
step S310, calculating the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout;
step S320, screening out the corner point nearest to the center of the library position layout, and calculating the library position region where that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
In order to achieve the above object, corresponding to the above intelligent blind guiding method, in another aspect of the present invention, an intelligent blind guiding device is further provided, which includes:
the storage unit is used for storing a program implementing any of the above intelligent blind guiding method steps, so that the control unit, the library position management unit, the identification unit, the processing unit and the information output unit can call and execute it in a timely manner;
wherein the control unit is configured to coordinate:
the panoramic camera is used for acquiring an environmental image frame;
the library position management unit is used for establishing a corresponding library position layout according to the environment image frame and adjusting the library position layout for azimuth alignment;
the identification unit is used for identifying the type of the sensitive target in the environmental image frame and acquiring the coordinate of the sensitive target in an image coordinate system;
the processing unit is used for determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation information of the sensitive target;
and the information output unit is used for showing the type of the sensitive target and the information of the spatial orientation where the sensitive target is located.
In a possible preferred embodiment, the step of establishing the library position layout by the library position management unit comprises: establishing library position rings that diffuse gradually outward within the viewfinder range, with the panoramic camera as the center; dividing the library position rings into sectors of a preset angle to establish the library position areas on each library position ring; and recording the coordinates of the corner points of each library position area in the image coordinate system, and assigning a number to each library position area.
In a possible preferred embodiment, the step of adjusting the library position layout for azimuth alignment by the library position management unit includes: gradually adjusting the radius of each library position ring until it is aligned with its real-world position, then establishing a mapping relation between each library position area and the distance in the corresponding real world, and simultaneously establishing a mapping relation between each library position area and the direction in the corresponding image coordinate system.
In a possible preferred embodiment, the step of acquiring the coordinates of the sensitive target in the image coordinate system by the identification unit includes: the identification unit establishes an enclosing frame for the sensitive target and records the coordinates of each corner point of the enclosing frame in the image coordinate system.
In a possible preferred embodiment, the step of determining, by the processing unit, the area in the library position layout where the sensitive target is located to calculate its spatial orientation includes: the processing unit calculates the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout; it then screens out the corner point nearest to the center of the library position layout and calculates the library position area where that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
The intelligent blind guiding method and blind guiding device provided by the invention can provide, to a certain extent, the real environmental road conditions around the blind person and help the user grasp the position information of sensitive targets in the environment, such as where the blind road is, the position of a vehicle, and the position of an obstacle or a pedestrian relative to the blind person. The user thus knows the types of nearby sensitive targets as well as their orientations and approximate distances, and can understand the environment of the walking route almost as a sighted person would. Moreover, because the scheme does not rely on blind guiding schemes based on traditional satellite positioning technology, and at the same time offers richer guiding information, the user can respond effectively according to that information, and the safety and reliability of the guidance are higher.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram illustrating basic steps of an intelligent blind guiding method according to the present invention;
FIG. 2 is a schematic diagram of an initial library location layout of the intelligent blind guiding method of the present invention;
FIG. 3 is a schematic diagram of an image coordinate system in the intelligent blind guiding method of the present invention;
FIG. 4 is a schematic diagram of a structure of a library location area of the intelligent blind guiding method of the present invention;
FIG. 5 is a schematic diagram of an actual library location layout structure of the intelligent blind guiding method of the present invention;
FIG. 6 is a schematic diagram illustrating an example of computing a bin location area where a sensitive target is located in the intelligent blind guiding method according to the present invention;
fig. 7 is a schematic structural diagram of the intelligent blind guiding equipment according to the present invention.
Detailed Description
In order to help those skilled in the art better understand the technical solution of the present invention, the specific technical solution is described below clearly and completely with reference to the embodiments. It should be apparent that the embodiments described herein are only some of the possible embodiments of the present invention, not all of them. It should be noted that, as will be apparent to those of ordinary skill in the art, the embodiments and features of the embodiments in this application may be combined with each other where they do not conflict. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the disclosure and protection scope of the present invention.
Furthermore, the terms "first," "second," "S1," "S2," and the like in the description, claims, and drawings are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that data so labeled are interchangeable under appropriate circumstances, so that the embodiments described herein can operate in sequences other than those illustrated. Also, the terms "including" and "having," and any variations thereof, are intended to cover non-exclusive inclusion. Unless expressly stated or limited otherwise, the terms "disposed," "mounted," and "connected" are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct or through an intermediate medium; or internal between two elements. The specific meanings of these terms can be understood by those skilled in the art according to the specific situation in combination with the prior art.
The blind need guidance because they lack information about the surrounding environment that would provide a basis for deciding how to act. This invention is therefore intended to help the blind identify and obtain the spatial information of specific sensitive target objects in the environment, so as to provide a basis of judgment for their actions.
Therefore, referring to fig. 1 to fig. 6, the intelligent blind guiding method provided by the present invention includes the following steps:
step S100 is to establish a library position layout for the collected environment image frames correspondingly, and align the library position layout in azimuth.
Specifically, the steps of establishing the library position layout and performing the orientation alignment include:
Step S110: establish library position rings that diffuse gradually outward, with the camera as the center; divide the library position rings into sectors of a preset angle to establish the library position areas on each library position ring.
Step S120: record the coordinates of the corner points of each library position area in the image coordinate system, and assign a number to each library position area.
Step S130: adjust the radius of each library position ring until it is aligned with its real-world position.
Step S140: establish a mapping relation between each library position area and the distance in the corresponding real world.
Step S150: establish a mapping relation between each library position area and the direction in the corresponding image coordinate system.
Specifically, as shown in fig. 2, the present application preferably adopts a panoramic camera and captures the environmental image within 360° around the user as the information identification area. Although the library position layout is fixed within the picture presented by the panoramic camera, the camera moves and the picture changes; consequently, the library position layout is unchanged with respect to position in the image coordinate system but changes with respect to the real-world coordinate system.
The computing power the whole method requires can be provided by an NVIDIA Jetson. The device is equivalent to a computer, characterized by a very small volume while still providing remarkable CPU and GPU compute, so it can decode the camera's video stream into individual frames. The image coordinate system is as shown in fig. 3: each image has a coordinate system with the top-left corner of the image as the origin to represent the position of each pixel, with x positive to the right and y positive downward, as in the brief sketch below.
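For illustration, a minimal Python sketch of this decoding step follows; the camera index 0 is an assumption, and any source OpenCV can open would serve.

```python
import cv2

# A minimal sketch of decoding the panoramic camera's stream into frames
# on the edge device; the device index 0 is an illustrative assumption.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    # Pixel (0, 0) is the image's top-left corner; x grows rightward and
    # y grows downward, matching the image coordinate system of fig. 3.
    print(f"decoded a {w}x{h} frame")
cap.release()
```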
Further, for example, using the clock face as a basic template and the center of the dial as the circle center, each pair of mutually symmetrical numbers is connected, such as 12 with 6 and 9 with 3, yielding 6 connected pairs of numbers.
Then, to better distinguish the directions, each line is rotated clockwise by 15 degrees. At this point the ground of the surrounding environment is divided, with the user as the circle center, into 12 equal sectors representing 12 directions;
in addition, a distance factor needs to be added: similarly, circles are drawn with the panoramic camera as the center and radii of one, two, three, four, five and six meters. In this case, besides the 1-meter-radius circle in which the user stands, 5 circles spaced 1 meter apart are obtained, and different circles indicate different distances from the user.
Finally, with the panoramic camera as the center, the 5 circles are drawn with cv2.circle of the OpenCV open-source vision library and the corresponding 12 lines with cv2.line, forming the initial library position shown in FIG. 2. This initial library position is not yet the real library position layout, but it provides the template from which the formal library position layout is subsequently established, as in the sketch below.
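The following Python sketch illustrates this template-drawing step. The frame dimensions and pixel radii are assumptions for illustration; the real radii are only fixed later during azimuth alignment (step S130).

```python
import math

import cv2
import numpy as np

# A minimal sketch of drawing the initial library position template.
W, H = 1280, 1280
center = (W // 2, H // 2)
canvas = np.zeros((H, W, 3), dtype=np.uint8)

# 6 concentric rings: the innermost holds the user, the outer 5 bound the
# library position areas. Pixel radii here are illustrative placeholders.
radii = [100, 200, 300, 400, 500, 600]
for r in radii:
    cv2.circle(canvas, center, r, (0, 255, 0), 1)

# 12 radial lines, each sector boundary rotated a further 15 degrees
# clockwise so the boundaries sit between the clock numbers.
R = radii[-1]
for k in range(12):
    angle = math.radians(k * 30 + 15)
    tip = (int(center[0] + R * math.cos(angle)),
           int(center[1] + R * math.sin(angle)))
    cv2.line(canvas, center, tip, (0, 255, 0), 1)

cv2.imwrite("initial_layout.png", canvas)
```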
Then, after the position of the panoramic camera is determined, positioning needs to be carried out by combining the initial library position with actual distances. As shown in fig. 2, besides the smallest circle in which the user stands, 5 circles are distributed in sequence; the circumference of the smallest circle is therefore adjusted to sit 1 meter from the user in the real world, the next circle outward to sit two meters from the user, and so on until all the circles are aligned with real-world distances.
It should be noted that the distance of each ring can be adjusted according to the user's situation; the examples here only illustrate the implementation principle of the technique and do not limit the number of rings. Those skilled in the art can therefore increase or decrease the number of rings according to the detection accuracy actually required, without departing from the technical concept of the present invention.
The approximate distance to the user can thereby be determined from which ring an object falls in; it follows that the smaller the spacing between rings, the more accurately distance can be located.
Further, the initial library position is converted into the library position layout. As shown in fig. 4, the idea of the present invention is to draw, in each half-sector area, an approximate trapezoid through its four corner points; this trapezoid, such as an isosceles trapezoid, is the formal library position area. Taking half-sector 12-3 as an example, in fig. 4 the half-sector is the original shape and the trapezoid is the final library position area shape.
In this form, such an isosceles trapezoid is drawn in each small sector. For example, using OpenCV on the image, a mouse callback function is registered with cv2.setMouseCallback; when a left-click event (cv2.EVENT_LBUTTONDOWN) is captured, the callback obtains the coordinates of the clicked pixel, creates a solid circle of radius 1 (i.e., draws a solid point at the click), and displays and records the coordinates of that point in the image coordinate system, as in the sketch below.
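A minimal sketch of this corner-recording step is given below; the window name and the template file are illustrative assumptions.

```python
import cv2

# A minimal sketch of recording library position corner points by mouse
# click on the layout template produced in the previous step.
points = []

def on_mouse(event, x, y, flags, param):
    # On a left click, draw a solid dot of radius 1 at the clicked pixel
    # and record its coordinates in the image coordinate system.
    if event == cv2.EVENT_LBUTTONDOWN:
        points.append((x, y))
        cv2.circle(img, (x, y), 1, (0, 0, 255), -1)
        print(f"corner recorded at ({x}, {y})")

img = cv2.imread("initial_layout.png")  # template from the drawing step
cv2.namedWindow("layout")
cv2.setMouseCallback("layout", on_mouse)
while True:
    cv2.imshow("layout", img)
    if cv2.waitKey(20) & 0xFF == 27:  # press Esc to finish
        break
cv2.destroyAllWindows()
```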
Thus each isosceles trapezoid is drawn, the coordinates of its points are recorded as the trapezoid is displayed, and each trapezoid is given a number of the form m-n, where m represents the direction and n the distance; for example, 12-3 means the 12 o'clock direction at 3 meters from the user. This identifies the library position region, finally forming the library position layout shown in fig. 5.
Thus each library position area has a number as well as the coordinates of its four points, i.e., the information representing its position.
Furthermore, in a use scene, the panoramic camera can be installed on equipment such as the user's blind guiding stick or a blind guiding robot. After the library position layout is initialized, the library position areas are generated on the camera picture with the user as the circle center, and the coordinate values of the four vertices of each library position area are stored on the edge device connected to the camera. The edge device is equivalent to a micro server (computer): it can display the camera images, and the library position information can also be stored in its memory or hardware. Azimuth alignment of the library position layout is then carried out; that is, the radius of each ring in the image is adjusted according to real ground-distance information so that real-world distances align with the library position areas, and the positions of the aligned library position areas on the image are stored on the edge device. This produces a library position layout image that moves with the user and represents real distance information. A sketch of the record kept per library position area follows;
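For illustration, the record kept on the edge device per library position area might take the following form. The exact storage format is an assumption; the scheme only requires each numbered area to keep its four corner coordinates plus its mapped real-world distance and image-space direction.

```python
# A minimal sketch of the aligned library position layout kept on the
# edge device. Keys and pixel values are illustrative assumptions.
library_layout = {
    "12-3": {
        "corners": [(610, 95), (670, 95), (700, 140), (580, 140)],
        "distance_m": 3,    # real-world distance mapped to ring n = 3
        "direction": "12",  # clock direction mapped from sector m = 12
    },
    # ... one entry per library position area:
    # 12 directions x 5 rings = 60 entries in total
}
```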
step S200 identifies a sensitive target in the environmental image frame, and obtains a coordinate of the sensitive target in an image coordinate system.
The step of acquiring the coordinates of the sensitive target in the image coordinate system comprises the following steps:
Step S210: extract the coordinates of the recognition frame from the recognition information of the sensitive target, so as to calculate the coordinates of each corner point of the recognition frame in the image coordinate system.
Specifically, after the azimuth alignment of the library position layout is completed, the target recognition stage can proceed. At this point the image acquired by the panoramic camera is sent to the identification unit, and target detection yields the type of the sensitive target object. In existing target detection technology, a YOLO technique such as yolov5 can generally be adopted for recognition, so after a sensitive target is recognized, the technique automatically marks recognition frames (a GT frame and a prediction frame) around it. This scheme uses the recognition information carried by each recognized sensitive target itself, of the form [c, (x1, y1), (x2, y2), conf], where c indicates the category of the sensitive target object, (x1, y1) and (x2, y2) are the coordinates of the top-left and bottom-right vertices of the target object's minimum recognition frame, and conf is the confidence of the detection. From this, the coordinates of the 4 corner points of the recognition frame in the image coordinate system can be calculated, as in the sketch below.
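A minimal sketch of expanding such a record into the four corner points follows; the variable names are assumptions based on the notation above.

```python
# A minimal sketch of expanding a yolov5-style detection record
# [c, (x1, y1), (x2, y2), conf] into the four corners of the
# recognition frame.
def box_corners(det):
    c, (x1, y1), (x2, y2), conf = det
    top_left = (x1, y1)
    top_right = (x2, y1)     # follows from the rectangle's axis alignment
    bottom_left = (x1, y2)
    bottom_right = (x2, y2)
    return c, [top_left, top_right, bottom_left, bottom_right], conf
```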
In addition, since the present application merely exemplifies the use of yolov5 for target detection, which is prior art, the deep learning and target detection process can be carried out with reference to that prior art. The present application only gives a preferred example: blind roads, traffic lights, standing water, pedestrians, dogs, cats, cars, buses, steps, bicycles, trolleybuses, trucks, zebra crossings and the like, screened as the basic features of the environment in daily travel scenes, are used as sensitive targets for the corresponding training and recognition, so as to provide the necessary environmental information for the blind person's travel.
Step S300: determine the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target.
The step of determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target, comprises:
Step S310: calculate the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout.
Step S320: screen out the corner point nearest to the center of the library position layout, and calculate the library position region where that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
Specifically, in this scheme, drawing each library position area generates the coordinates of its four vertices, and 60 half-sector library position areas are obtained from the 12 directions and 5 annular regions. The known conditions are as follows: the four vertex coordinates of each library position area, denoted (xa, ya), (xb, yb), (xc, yc), (xd, yd); the number m-n defined for each library position area; the identification information of the group of Q sensitive target objects obtained by the identification unit (yolov5), where for q = 1, 2, ..., Q each record has the form [c_q, (x1, y1), (x2, y2), conf_q], with c_q representing the class of the target object, (x1, y1) and (x2, y2) the coordinates of the top-left and bottom-right vertices of the target object's minimum rectangle, and conf_q the confidence of the target object, i.e., the accuracy of the first two pieces of information; and the library position center point O: (x0, y0).
Assume a sensitive target enters the library position area. Since the identification unit gives each sensitive target its position information, i.e., (x1, y1) and (x2, y2), the target object can be replaced by the rectangular frame determined by these two coordinates, which requires less computation; the type of the object is denoted by c_q.
As shown in FIG. 6, the distances from the four vertices of the recognition frame q to the center O are calculated first. Owing to the properties of a rectangle, once the top-left vertex coordinates (x1, y1) and the bottom-right vertex coordinates (x2, y2) are known, the top-right vertex coordinates (x2, y1) and the bottom-left vertex coordinates (x1, y2) are obtained directly. The Euclidean distances from the four points to the point O (x0, y0) are then calculated.
Let dA denote the distance from the top-left vertex A (x1, y1) of rectangle q to O and dB the distance from the top-right vertex B (x2, y1) to O; let dC denote the distance from the bottom-left vertex C (x1, y2) to O and dD the distance from the bottom-right vertex D (x2, y2) to O. This gives:
dA = sqrt((x1 - x0)^2 + (y1 - y0)^2)
dB = sqrt((x2 - x0)^2 + (y1 - y0)^2)
dC = sqrt((x1 - x0)^2 + (y2 - y0)^2)
dD = sqrt((x2 - x0)^2 + (y2 - y0)^2)
Sort dA, dB, dC and dD from small to large; assuming the sorted result is dA < dB < dC < dD, then A is the point closest to the point O, and the final calculation is which library position area the point A falls within. A sketch of this nearest-corner step follows.
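The following Python sketch implements this part of steps S310 and S320 under the notation above; O = (x0, y0) is the pixel position of the layout center.

```python
import math

# A minimal sketch of computing the Euclidean distance from each corner
# of the recognition frame to the layout center O and keeping the
# nearest corner.
def nearest_corner(corners, x0, y0):
    dists = [(math.hypot(x - x0, y - y0), (x, y)) for (x, y) in corners]
    dists.sort(key=lambda t: t[0])  # small to large, as in the text
    d_min, corner = dists[0]        # the point closest to O
    return d_min, corner
```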
Calculating the library position area in which the point A falls is divided into two steps:
1. calculate the directional position of the point: determine which of the 12 directional regions the point falls in (i.e., the 12 regions into which the 12 lines through the point O divide the plane);
2. calculate the distance position of the point: determine which ring the point falls in (i.e., which of the annular regions formed layer by layer by the outward diffusion from the point O).
The specific calculation method is as follows:
In this scheme, m-n denotes the number of a library position: m denotes the directional position, with 360° divided into 12 equal regions of 30° each, and n denotes the distance position, counted through the rings layer by layer outward from the center point O, 5 layers in total.
1. Calculate the directional position of the point:
If a certain point M on the image is expressed in a rectangular coordinate system with the point O as the origin, and β is the included angle between the ray OM and the positive direction of the X axis, then β is equal to:
β = atan2(yM, xM), taken in the range [0°, 360°)    (5)
Relation of each directional region m to β: since the 12 directional regions each span 30°, the value of m is determined by which 30° interval β falls into.    (6)
Because the four vertices of the recognition frame q are coordinates in the image coordinate system, while the angle formula above uses a rectangular coordinate system with the point O (x0, y0) as the origin, the coordinates of the point A are transformed between coordinate systems before β is calculated; since the image coordinate system's y axis points downward, the y component is inverted:
xA' = x1 - x0,  yA' = y0 - y1    (7)
Then, the value of m is calculated from the β value by equation 6. A sketch of this directional-position step follows.
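The sketch below illustrates the directional-position step. Mapping the resulting 30° sector index onto a clock number m is left as an assumption, since it depends on how the layout was calibrated.

```python
import math

# A minimal sketch of the directional-position step. The image y axis
# points downward, so it is inverted when moving to the O-centered
# rectangular coordinate system (eq. 7).
def direction_sector(px, py, x0, y0):
    x, y = px - x0, y0 - py                        # coordinate transform
    beta = math.degrees(math.atan2(y, x)) % 360.0  # eq. 5, in [0, 360)
    return int(beta // 30)   # which 30-degree interval beta falls into
                             # (eq. 6); the clock number m then follows
                             # from the layout's calibration
```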
2. Calculate the distance position of the point:
As mentioned above, drawing each library position area yields the coordinates of its 4 points, denoted (xa, ya), (xb, yb), (xc, yc), (xd, yd), where (xa, ya) and (xb, yb) are the two points far from the point O (x0, y0) and (xc, yc) and (xd, yd) are the two points close to the point O. From these, the relationship between the distance L from any point M to the point O and the value n in the library position number can be obtained (because for the same n the distance from each directional region m to O is the same; for convenience of calculation the directional region m = 3 is used instead): n corresponds to the ring whose inner and outer radii bound L.    (8)
The distance dA from the point A to the point O (x0, y0) has already been calculated above, so the value of n is then calculated according to equation 8. A sketch of this distance-position step follows.
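The sketch below illustrates the distance-position step (equation 8); the aligned ring radii are illustrative values fixed during azimuth alignment.

```python
import bisect

# A minimal sketch of finding the ring number n: n is the first aligned
# ring whose outer radius the distance d does not exceed. ring_radii are
# illustrative pixel radii of the 5 library position rings, innermost
# first; the user's own innermost circle is ignored for simplicity.
def ring_n(d, ring_radii=(200, 300, 400, 500, 600)):
    idx = bisect.bisect_left(ring_radii, d)
    if idx >= len(ring_radii):
        return None          # outside the outermost ring: no library position
    return idx + 1           # rings are numbered n = 1..5
```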
With the values of m and n each calculated, the library position area number m-n in which the recognition frame q is located is determined. Through this number, the corresponding real-world distance of the point and its corresponding direction in the image coordinate system can be looked up, and combined with the recognized type of the sensitive target this forms reliable spatial information. The blind person can thus be warned of the target types that need attention together with their distance and direction, can make accurate judgments, and gains improved safety.
On the other hand, as shown in fig. 7, corresponding to the above intelligent blind guiding method, the present invention further provides an intelligent blind guiding device, which includes:
the storage unit is used for storing a program implementing any of the above intelligent blind guiding method steps, so that the control unit, the library position management unit, the identification unit, the processing unit and the information output unit can call and execute it in a timely manner;
wherein the control unit is configured to coordinate:
the panoramic camera is used for acquiring an environmental image frame;
the library position management unit is used for establishing a corresponding library position layout according to the environment image frame and adjusting the library position layout for azimuth alignment;
the identification unit is used for identifying the type of the sensitive target in the environmental image frame and acquiring the coordinate of the sensitive target in an image coordinate system;
the processing unit is used for determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation information of the sensitive target;
and the information output unit, such as a loudspeaker and a display, is used for showing the type of the sensitive object and the information of the spatial orientation of the sensitive object.
Further, the step of establishing the library position layout by the library position management unit includes: establishing library position rings that diffuse gradually outward within the viewfinder range, with the panoramic camera as the center; dividing the library position rings into sectors of a preset angle so as to establish the library position areas on each library position ring; and recording the coordinates of the corner points of each library position area in the image coordinate system, and assigning a number to each library position area.
Further, the step of adjusting the library position layout for azimuth alignment by the library position management unit includes: gradually adjusting the radius of each library position ring until it is aligned with its real-world position, then establishing a mapping relation between each library position area and the distance in the corresponding real world, and simultaneously establishing a mapping relation between each library position area and the direction in the corresponding image coordinate system.
Further, the step of acquiring the coordinates of the sensitive target in the image coordinate system by the identification unit includes: the identification unit establishes an enclosure frame for the sensitive target and records coordinates of each corner point of the enclosure frame under an image coordinate system.
Further, the step of the processing unit determining the area in the library position layout where the sensitive target is located to calculate its spatial orientation includes: the processing unit calculates the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout; it then screens out the corner point nearest to the center of the library position layout and calculates the library position area where that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
In summary, the intelligent blind guiding method and blind guiding device provided by the invention can provide, to a certain extent, the real environmental road conditions around the blind person and help the user grasp the position information of sensitive targets in the environment, such as where the blind road is, the distance and position of a vehicle, and the position of an obstacle or a pedestrian relative to the blind person. The user thus knows the types of nearby sensitive targets as well as their orientations and approximate distances, and can understand the environment of the walking route almost as a sighted person would. Moreover, because the scheme does not rely on blind guiding schemes based on traditional satellite positioning technology, and at the same time offers richer guiding information, the user can respond effectively according to that information, and the safety and reliability of the guidance are higher.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. They are not exhaustive and do not limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to make best use of it. The invention is limited only by the claims and their full scope and equivalents; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in its protection scope.
It will be appreciated by those skilled in the art that, in addition to implementing the system, apparatus and various modules thereof provided by the present invention in the form of pure computer readable program code, the same procedures may be implemented entirely by logically programming method steps such that the system, apparatus and various modules thereof provided by the present invention are implemented in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
In addition, all or part of the steps of the method according to the above embodiments may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a single chip, a chip, or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.
Claims (4)
1. An intelligent blind guiding method is characterized by comprising the following steps:
step S100, establishing a library position layout corresponding to the collected environment image frames, and performing azimuth alignment on the library position layout, wherein the library position layout establishing step comprises: step S110, establishing library position rings that diffuse gradually outward, with a camera as the center; dividing the library position rings into sectors of a preset angle to establish the library position areas on each library position ring; step S120, recording the coordinates of the corner points of each library position area in an image coordinate system, and assigning a number to each library position area;
step S200, identifying a sensitive target in an environment image frame, and acquiring the coordinates of the sensitive target in the image coordinate system, comprising: step S210, extracting the coordinates of the recognition frame from the recognition information of the sensitive target, so as to calculate the coordinates of each corner point of the recognition frame in the image coordinate system;
step S300, determining the area in the library position layout where the sensitive target is located, so as to calculate the spatial orientation of the sensitive target, comprising:
step S310, calculating the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout;
step S320, screening out the corner point nearest to the center of the library position layout, and calculating the library position region where that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system.
2. The intelligent blind guiding method of claim 1, wherein the step of aligning the library position layout comprises:
step S130, adjusting the radius of each library position ring until it is aligned with its real-world position;
step S140, establishing a mapping relation between each library position area and the distance in the corresponding real world;
step S150, establishing a mapping relation between each library position area and the direction in the corresponding image coordinate system.
3. An intelligent blind guiding device, comprising:
a storage unit for storing a program implementing the intelligent blind guiding method steps according to any one of claims 1 to 2, so that the control unit, the library position management unit, the identification unit, the processing unit and the information output unit can call and execute it in a timely manner;
wherein the control unit is configured to coordinate:
the panoramic camera is used for acquiring an environment image frame;
a library position management unit, configured to establish a corresponding library position layout according to the environment image frame and adjust the library position layout for azimuth alignment, wherein the step of establishing the library position layout by the library position management unit comprises: establishing library position rings that diffuse gradually outward within the viewfinder range, with the panoramic camera as the center; dividing the library position rings into sectors of a preset angle to establish the library position areas on each library position ring; and recording the coordinates of the corner points of each library position area in the image coordinate system, and assigning numbers to the library position areas;
the identification unit, used for identifying the type of the sensitive target in the environment image frame and acquiring the coordinates of the sensitive target in an image coordinate system, wherein: the identification unit establishes an enclosing frame for the sensitive target and records the coordinates of each corner point of the enclosing frame in the image coordinate system;
the processing unit, used for determining the area in the library position layout where the sensitive target is located so as to calculate the spatial orientation information of the sensitive target, wherein: the processing unit calculates the Euclidean distance from each corner point of the sensitive target's enclosing frame to the center of the library position layout; it then screens out the corner point nearest to the center of the library position layout and calculates the library position area where that corner point is located, so as to obtain the corresponding real-world distance and the corresponding direction in the image coordinate system;
and the information output unit is used for showing the type of the sensitive target and the information of the spatial orientation where the sensitive target is located.
4. The intelligent blind guiding device according to claim 3, wherein the step of adjusting the library position layout for azimuth alignment by the library position management unit comprises: gradually adjusting the radius of each library position ring until it is aligned with its real-world position, then establishing a mapping relation between each library position area and the distance in the corresponding real world, and simultaneously establishing a mapping relation between each library position area and the direction in the corresponding image coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211141077.1A CN115218918B (en) | 2022-09-20 | 2022-09-20 | Intelligent blind guiding method and blind guiding equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115218918A (en) | 2022-10-21
CN115218918B (en) | 2022-12-27
Family
ID=83617378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211141077.1A Active CN115218918B (en) | 2022-09-20 | 2022-09-20 | Intelligent blind guiding method and blind guiding equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115218918B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10809079B2 (en) * | 2018-08-24 | 2020-10-20 | Ford Global Technologies, Llc | Navigational aid for the visually impaired |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1936477A1 (en) * | 2005-09-27 | 2008-06-25 | Tamura Corporation | Position information detection device, position information detection method, and position information detection program |
CN102902271A (en) * | 2012-10-23 | 2013-01-30 | 上海大学 | Binocular vision-based robot target identifying and gripping system and method |
CN102973395A (en) * | 2012-11-30 | 2013-03-20 | 中国舰船研究设计中心 | Multifunctional intelligent blind guiding method, processor and multifunctional intelligent blind guiding device |
CN104574386A (en) * | 2014-12-26 | 2015-04-29 | 速感科技(北京)有限公司 | Indoor positioning method based on three-dimensional environment model matching |
CN107402018A (en) * | 2017-09-21 | 2017-11-28 | 北京航空航天大学 | A kind of apparatus for guiding blind combinatorial path planing method based on successive frame |
CN110118973A (en) * | 2019-05-27 | 2019-08-13 | 杭州亚美利嘉科技有限公司 | Warehouse Intellisense recognition methods, device and electronic equipment |
CN110664593A (en) * | 2019-08-21 | 2020-01-10 | 重庆邮电大学 | Hololens-based blind navigation system and method |
CN110837814A (en) * | 2019-11-12 | 2020-02-25 | 深圳创维数字技术有限公司 | Vehicle navigation method, device and computer readable storage medium |
CN111743740A (en) * | 2020-06-30 | 2020-10-09 | 平安国际智慧城市科技股份有限公司 | Blind guiding method and device, blind guiding equipment and storage medium |
WO2022078513A1 (en) * | 2020-10-16 | 2022-04-21 | 北京猎户星空科技有限公司 | Positioning method and apparatus, self-moving device, and storage medium |
WO2022151560A1 (en) * | 2021-01-14 | 2022-07-21 | 北京工业大学 | Smart cane for blind people based on mobile wearable computing and fast deep neural network |
CN113624236A (en) * | 2021-08-06 | 2021-11-09 | 西安电子科技大学 | Mobile device-based navigation system and navigation method for blind people |
CN113963254A (en) * | 2021-08-30 | 2022-01-21 | 武汉众智鸿图科技有限公司 | Vehicle-mounted intelligent inspection method and system integrating target identification |
Non-Patent Citations (2)
Title |
---|
K. Iwatsuka et al., "Development of a guide dog system for the blind people with character recognition ability," Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 2004-09-20, pp. 1-4 *
Zhang Haimin et al., "Research on navigation methods for blind collision-avoidance paths under deep learning," Journal of Nanjing University of Information Science & Technology (Natural Science Edition), vol. 14, no. 2, 2022-03-28, pp. 220-226 *
Also Published As
Publication number | Publication date |
---|---|
CN115218918A (en) | 2022-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12046006B2 (en) | LIDAR-to-camera transformation during sensor calibration for autonomous vehicles | |
CN105512646B (en) | A kind of data processing method, device and terminal | |
CN112328730B (en) | Map data updating method, related device, equipment and storage medium | |
CN110322702A (en) | A kind of Vehicular intelligent speed-measuring method based on Binocular Stereo Vision System | |
US11625851B2 (en) | Geographic object detection apparatus and geographic object detection method | |
CN110285793A (en) | A kind of Vehicular intelligent survey track approach based on Binocular Stereo Vision System | |
US20200082614A1 (en) | Intelligent capturing of a dynamic physical environment | |
WO2018126228A1 (en) | Sign and lane creation for high definition maps used for autonomous vehicles | |
CN112674998B (en) | Blind person traffic intersection assisting method based on rapid deep neural network and mobile intelligent device | |
Ghilardi et al. | Crosswalk localization from low resolution satellite images to assist visually impaired people | |
CN114252884A (en) | Method and device for positioning and monitoring roadside radar, computer equipment and storage medium | |
CN114509060A (en) | Map generation device, map generation method, and computer program for map generation | |
JP2007004256A (en) | Image processing apparatus and image processing method | |
CN114627398A (en) | Unmanned aerial vehicle positioning method and system based on screen optical communication | |
CN118411507A (en) | Semantic map construction method and system for scene with dynamic target | |
CN114252883B (en) | Target detection method, apparatus, computer device and medium | |
CN115218918B (en) | Intelligent blind guiding method and blind guiding equipment | |
CN114252859A (en) | Target area determination method and device, computer equipment and storage medium | |
JP6916975B2 (en) | Sign positioning system and program | |
CN114252868A (en) | Laser radar calibration method and device, computer equipment and storage medium | |
CN114383594B (en) | Map generation device, map generation method, and computer program product for map generation | |
CN112818866A (en) | Vehicle positioning method and device and electronic equipment | |
RU2828682C2 (en) | Method and system for determining location and orientation of user's device with reference to visual features of environment | |
CN118172385B (en) | Unmanned aerial vehicle aerial video track data extraction method for traffic participants at complex intersections | |
CN114255264B (en) | Multi-base-station registration method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |