CN117001715A - Intelligent auxiliary system and method for visually impaired people - Google Patents


Info

Publication number
CN117001715A
CN117001715A (application CN202311105770.8A)
Authority
CN
China
Prior art keywords
information
target
instruction
distance
wearer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311105770.8A
Other languages
Chinese (zh)
Inventor
程明
姜力
戴景辉
彭椿皓
李正辰
孙赫文
杨大鹏
王滨
刘宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202311105770.8A priority Critical patent/CN117001715A/en
Publication of CN117001715A publication Critical patent/CN117001715A/en
Pending legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Prostheses (AREA)

Abstract

The application relates to the technical field of robots, in particular to an intelligent auxiliary system and method for visually impaired people. The system comprises a voice device, a shooting device, a central processing device, a distance sensing device and a manipulator. The central processing device is used for: determining a target operation instruction and an instruction target of a wearer according to voice information acquired by the voice device; determining the position of the instruction target in the image information according to the feature information of the instruction target and the image information acquired by the shooting device; determining a target operation point according to the distance information between the instruction target and the wearer acquired by the distance sensing device; generating a guidance instruction according to the distance information and the target operation point; when the wearer reaches the target operation point, acquiring the relative distance information between the manipulator and the instruction target according to a proximity sensor of the manipulator; and generating an auxiliary signal according to the distance information and the relative distance information. The system enables a visually impaired person to independently complete a target operation in an unfamiliar environment.

Description

Intelligent auxiliary system and method for visually impaired people
Technical Field
The application relates to the technical field of robots, in particular to an intelligent auxiliary system and method for visually impaired people.
Background
For patients who have lost their vision, both outdoor travel and indoor movement, as well as the manipulation of objects, are limited. The assistance techniques and devices studied in the prior art currently focus mainly on outdoor navigation assistance and mobility assistance. For example, blind-guiding walking sticks and the like provide positioning information through technologies such as the global positioning system and image recognition, plan a travel route for the wearer in combination with road-condition information, and prompt the blind person to advance along the planned route by means such as voice information; these have achieved good results for outdoor mobile navigation of the blind. However, when a visually impaired person performs daily operations indoors, he or she often determines the target position from partial path guidance and muscle memory in a familiar environment, so that the visually impaired person cannot perform relatively accurate operations in an unfamiliar indoor environment.
Disclosure of Invention
The problem addressed by the present application is how to assist in guiding visually impaired persons to perform a targeted operation.
In order to solve the above problem, the present application provides an intelligent auxiliary system and method for visually impaired people.
In a first aspect, the application provides an intelligent auxiliary system for visually impaired people, which comprises a voice device, a shooting device, a central processing device, a distance sensing device and a manipulator, wherein the central processing device is respectively in communication connection with the voice device, the shooting device, the distance sensing device and the manipulator; the central processing device is used for:
determining a target operation instruction and an instruction target of a wearer according to the voice information acquired by the voice device;
determining the position of the instruction target in the image information according to the characteristic information of the instruction target and the image information acquired by the shooting device;
determining a target operation point position according to the distance information between the instruction target and the wearer, which is acquired by the distance sensing device; generating a guide instruction according to the distance information and the target operation point position, wherein the guide instruction is used for assisting the wearer to reach the target operation point position;
when the wearer reaches the target operation point, acquiring relative distance information between the manipulator and the instruction target according to a proximity sensor of the manipulator; generating an auxiliary signal according to the distance information and the relative distance information, wherein the auxiliary signal is used for controlling a feedback array unit of the manipulator to send out guide information according to a preset auxiliary information transmission strategy; the guiding information is used for assisting the wearer in adjusting the arms to reach the target working pose.
The beneficial effects of the application are as follows: the voice device collects the wearer's voice information and delivers it to the central processing device for recognition, obtaining the instruction content and the target object of the instruction; the shooting device acquires images, and the central processing device recognizes the feature information carried in the image information and matches it against the target instruction to determine the position of the instruction target in the image information; the distance sensing device obtains the distance between the target and the wearer, the operation point corresponding to the instruction target is determined, and the generated guidance instruction guides the wearer to the target point. Control instructions are then generated from the distance between the manipulator and the target obtained by the manipulator's proximity sensor, combined with the distance information obtained from the image information, so that the array feedback unit performs vibration feedback according to a preset guidance rule, prompting the wearer with guidance information such as moving the arm forward or backward, lifting it, or putting it down. In a complex indoor environment, the wearer is thus assisted, according to his or her voice instruction, in finding and grasping an instruction target and in completing living tasks such as drinking water and sitting down, so that a visually impaired person can independently complete a target operation in an unfamiliar environment.
Optionally, the determining, according to the feature information of the instruction target and the image information acquired by the shooting device, the position of the instruction target in the image information includes:
dividing the image information according to the characteristics of a preset instruction target to obtain a divided image;
performing feature comparison and feature assignment on the segmented image according to preset matching weights;
and obtaining the confidence coefficient of each image information according to the feature assignment, and determining the segmented image as the instruction target when the confidence coefficient is greater than or equal to a confidence coefficient threshold value.
Optionally, the shooting device includes a binocular camera, and the distance information between the instruction target and the wearer acquired by the distance sensing device includes:
respectively acquiring the original distance of the instruction target in the binocular camera image according to the distance sensing device;
and correcting the original distance according to a preset binocular camera distance and a binocular camera visual normal angle to obtain the distance information between the instruction target and the wearer.
Optionally, the correcting the original distance according to the preset binocular camera distance and the binocular camera visual normal angle includes:
according to the visual normal angle of the binocular camera, respectively performing matrix transformation on the image information of the binocular camera to obtain a preprocessed image, wherein the preprocessed image is a same plane mapping image with the same optical axis;
the distance information between the instruction target and the wearer is obtained based on a geometric relationship between the pre-processed image and the binocular camera distance.
Optionally, the generating the guiding instruction according to the distance information and the target operation point position includes:
obtaining a collision distance between an obstacle and the wearer according to the preprocessed image;
generating a shortest obstacle avoidance route according to the distance information and the collision distance;
generating an auxiliary guiding instruction according to the shortest obstacle avoidance route, wherein the auxiliary guiding instruction comprises voice information and vibration signals, the voice information is used for assisting in guiding the wearer to walk, and the vibration signals are used for controlling a feedback array unit at the trunk according to the auxiliary information transmission strategy.
Optionally, the acquiring, according to the proximity sensor of the manipulator, the relative distance information between the manipulator and the instruction target, and the generating an auxiliary signal from the distance information and the relative distance information comprise:
acquiring the relative distance between the manipulator and the instruction target;
when the relative distance is greater than or equal to a preset proximity threshold, generating an auxiliary positioning signal according to the distance information and the relative distance, wherein the auxiliary positioning signal is used for controlling the feedback array unit to generate a direction vibration signal according to the auxiliary information transmission strategy, and the direction vibration signal is used for guiding a wearer to adjust the arm pose;
when the relative distance is smaller than a preset proximity threshold, an auxiliary reminding signal is generated, the auxiliary reminding signal is used for controlling the feedback array unit to generate a flicker vibration signal, and the flicker vibration signal is used for reminding a wearer of completing hand actions.
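The threshold branch above can be sketched as follows; the signal names and the default threshold value are illustrative assumptions, not values taken from the application.

```python
# Sketch of the proximity-threshold branch: beyond the threshold the system
# emits a directional positioning cue for arm-pose adjustment; inside it, a
# flicker reminder to complete the hand action. Names are hypothetical.
def auxiliary_signal(relative_distance_cm: float,
                     proximity_threshold_cm: float = 5.0) -> str:
    if relative_distance_cm >= proximity_threshold_cm:
        return "directional_vibration"  # guide the wearer to adjust arm pose
    return "flicker_vibration"          # remind the wearer to complete the grasp
```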
Optionally, the manipulator comprises a prosthetic device communicatively connected to the central processing device, the central processing device further configured to:
when the wearer reaches the target operation point position, generating a driving instruction according to the relative distance information and the corrected distance, wherein the driving instruction is used for controlling a driving mechanism of the artificial limb device to adjust the pose of the artificial limb device according to target instruction parameters;
and when the prosthetic device reaches the target pose, generating a tail end operation instruction according to the target operation instruction, wherein the tail end operation instruction is used for controlling the prosthetic device to finish hand actions at the tail end.
Optionally, the controlling the feedback array unit to vibrate according to a preset auxiliary information transmission strategy includes:
obtaining a moving direction and a moving distance in a spatial range according to the target moving pose and the current manipulator pose in the auxiliary signal; controlling the feedback array unit to sequentially generate vibration signals along the moving direction; and when the manipulator has moved the moving distance along the moving direction, controlling the feedback array unit to flicker and vibrate.
Optionally, the auxiliary signal includes manipulator displacement information, and the auxiliary information transmission strategy includes:
when the auxiliary signal comprises the horizontal displacement information of the manipulator, the feedback array unit is controlled to vibrate sequentially along the horizontal direction of the displacement information, so as to guide the arm of the wearer to advance and retreat or rotate horizontally;
when the auxiliary signal comprises the vertical displacement information of the manipulator, the feedback array unit is controlled to vibrate from edge to center or from center to edge, and the feedback array unit is used for guiding the arm of the wearer to lift or lower.
In a second aspect, the present application provides an intelligent auxiliary method for visually impaired people, which is applied to the intelligent auxiliary system for visually impaired people according to any one of the first aspect; the intelligent auxiliary method includes:
determining a target operation instruction and an instruction target of a wearer according to voice information acquired by a voice device;
determining the position of the instruction target in the image information according to the characteristic information of the instruction target and the image information acquired by the shooting device;
determining a target operation point position according to the distance information between the instruction target and the wearer, which is acquired by the distance sensing device; generating a guide instruction according to the distance information and the target operation point position, wherein the guide instruction is used for assisting the wearer to reach the target operation point position;
when the wearer reaches the target operation point, acquiring relative distance information between the manipulator and the instruction target according to a proximity sensor of the manipulator; generating an auxiliary signal according to the distance information and the relative distance information, wherein the auxiliary signal is used for controlling a feedback array unit of the manipulator to send out guide information according to a preset auxiliary information transmission strategy; the guiding information is used for assisting the wearer in adjusting the arms to reach the target working pose.
Compared with the prior art, the intelligent auxiliary method for visually impaired people has the same advantages as the intelligent auxiliary system for visually impaired people described above, which are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent auxiliary system for visually impaired people according to an embodiment of the present application;
FIG. 2 is a schematic diagram of geometric relationships for providing binocular camera image correction according to an embodiment of the present application;
fig. 3 is a schematic diagram of mapping relationship between binocular camera images to the same plane according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the application may be readily understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. While certain embodiments of the application are illustrated in the drawings, it should be understood that the application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the application. It should be understood that the drawings and embodiments of the application are for illustration purposes only and are not intended to limit the scope of the present application.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments"; the term "optionally" means "in an alternative embodiment". Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by such devices, modules, or units.
It should be noted that references to "one" or "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the devices in the embodiments of the present application are for illustrative purposes only and are not intended to limit the scope of such messages or information.
As shown in fig. 1, an embodiment of the present application provides an intelligent auxiliary system for visually impaired people, which includes a voice device, a shooting device, a central processing device, a distance sensing device and a manipulator, wherein the central processing device is respectively in communication connection with the voice device, the shooting device, the distance sensing device and the manipulator; the central processing device is used for:
determining a target operation instruction and an instruction target of a wearer according to the voice information acquired by the voice device;
specifically, in this embodiment, devices such as a bluetooth headset or an integrated helmet with a headset may be adopted, a voice system recognition module is adopted to obtain voice command information collected by the headset, and a preset semantic recognition model is used to judge the command content therein and determine the object of the target operation, for example, recognize that the voice information of the wearer is "i want to drink water", judge and determine that the task is to grasp the cup, and determine that the task target is the cup.
Furthermore, the voice recognition further comprises an authentication unit that matches the collected voice against preset user voice information, realizing accurate control and recognition of instructions and avoiding inaccurate instruction recognition caused by the headset picking up other people's voices from the environment.
Determining the position of the instruction target in the image information according to the characteristic information of the instruction target and the image information acquired by the shooting device;
specifically, in this embodiment, devices such as smart glasses with binocular cameras or an integrated helmet provided with binocular cameras may be used, and image information under respective viewing angles may be acquired through the binocular cameras, and labels of objects identified in the image information may be determined and matched with the instruction targets.
Determining a target operation point position according to the distance information between the instruction target and the wearer, which is acquired by the distance sensing device; generating a guide instruction according to the distance information and the target operation point position, wherein the guide instruction is used for assisting the wearer to reach the target operation point position;
specifically, in this embodiment, distance information between a target and a wearer in an image of the binocular camera is obtained through the photoelectric sensor, and corrected based on a geometric relationship between the eyes of the binocular camera, so as to obtain a position relationship and a corrected distance between an actual wearer and a target object, and further determine a target operation point position capable of operating the target in a reasonable operation space range. And determining the position relation between the operation target and the wearer according to the image information, determining the distance between other obstacles in the image information and the wearer, and determining the shortest distance which does not contact the obstacle to guide the wearer to move to the target operation point.
When the wearer reaches the target operation point, acquiring relative distance information between the manipulator and the instruction target according to a proximity sensor of the manipulator; generating an auxiliary signal according to the distance information and the relative distance information, wherein the auxiliary signal is used for controlling a feedback array unit of the manipulator to send out guide information according to a preset auxiliary information transmission strategy; the guiding information is used for assisting the wearer in adjusting the arms to reach the target working pose.
Further, the auxiliary signal is configured to control the feedback array unit to vibrate according to a preset auxiliary information transmission strategy, including:
obtaining a moving direction and a moving distance in a spatial range according to the target moving pose and the current manipulator pose in the auxiliary signal; controlling the feedback array unit to sequentially generate vibration signals along the moving direction; and when the manipulator has moved the moving distance along the moving direction, controlling the feedback array unit to flicker and vibrate.
Specifically, taking a drinking operation as an example in this embodiment: the wearer first follows the guidance instructions to arrive near the table and is informed via the earphone that the target is nearby, then begins reaching for the cup. At this point, based on the corrected distance information and the proximity-sensor information transmitted by the manipulator, the position of the cup relative to the manipulator is determined; for example, the cup is located 10 cm to the front-left of the manipulator, at a 45° angle to the manipulator's extension direction. The central processing device then determines the relationship between the cup and the manipulator and generates movement guidance information by controlling the array feedback unit arranged on the manipulator to vibrate: rolling vibration from the proximal end of the arm toward the distal end prompts the wearer to move forward in the current direction; once the manipulator satisfies the required distance in the front-rear direction, rolling vibration from the right end of the manipulator toward the left end prompts the wearer to translate the arm horizontally to the left; and when the manipulator reaches the target position, the entire array feedback unit flickers three times at 1-second intervals to prompt the wearer to complete the grasping action on the target.
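The rolling-then-flicker guidance described above can be sketched as a simple pattern generator. The four-motor array size, motor indexing, and step encoding are assumptions for illustration, not details from the application.

```python
# Illustrative sketch of the vibration-guidance strategy: sequential
# ("rolling") activation along the movement direction, then a triple
# flicker on arrival to cue the grasp.
def guidance_pattern(dx_cm: float, dy_cm: float, arrived: bool,
                     n_motors: int = 4):
    """Return a list of (motor_index, mode) steps for the feedback array.

    dx_cm > 0: target is ahead -> roll proximal-to-distal (forward cue)
    dy_cm > 0: target is left  -> roll right-to-left (horizontal cue)
    arrived  : flicker the whole array three times (grasp cue)
    """
    if arrived:
        return [("all", "flicker")] * 3
    dominant = dx_cm if abs(dx_cm) >= abs(dy_cm) else dy_cm
    order = range(n_motors) if dominant > 0 else range(n_motors - 1, -1, -1)
    return [(i, "vibrate") for i in order]
```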
This embodiment describes only the drinking-water instruction: after the instruction is obtained from the wearer, the position of the cup is determined from the image information, and the wearer is guided to the target position to complete the drinking action. Operation instructions in other spatial ranges, such as reaching for high objects or carrying indoor objects, follow the same steps of collection, identification and guidance, and are not repeated here.
In this embodiment, the voice device collects the wearer's voice information and the central processing device recognizes it, obtaining the instruction content and the target object of the instruction; the shooting device acquires images, and the central processing device recognizes the feature information carried in the image information and matches it against the target instruction to determine the position of the instruction target in the image information; the distance sensing device obtains the distance between the target and the wearer, the operation point corresponding to the instruction target is determined, and the generated guidance instruction guides the wearer to the target point. Control instructions are generated from the manipulator-to-target distance obtained by the manipulator's proximity sensor, combined with the distance information obtained from the image information, so that the array feedback unit performs vibration feedback according to the preset guidance rule, prompting the wearer to move the arm forward or backward, lift it, or put it down. In a complex indoor environment, the wearer is thus assisted, according to his or her voice instruction, in finding and grasping the instruction target and completing living tasks such as drinking water and sitting down, so that a visually impaired person can independently complete target operations in an unfamiliar environment.
In an optional embodiment, the determining, according to the feature information of the instruction target and the image information acquired by the shooting device, the position of the instruction target in the image information includes:
dividing the image information according to the characteristics of a preset instruction target to obtain a divided image;
performing feature comparison and feature assignment on the segmented image according to preset matching weights;
and obtaining the confidence coefficient of each image information according to the feature assignment, and determining the segmented image as the instruction target when the confidence coefficient is greater than or equal to a confidence coefficient threshold value.
Specifically, when identifying the target in the instruction information, a preliminary segmentation is performed according to the target's characteristics. For example, when matching a drinking instruction, color blocks with roughly cylindrical shapes in the image are segmented individually; each block is then matched against the target features and assigned scores according to the degree of match, such as whether the block has a handle, whether the corresponding object size is between 10 cm and 30 cm, and whether liquid appears to be present. The weighted feature scores yield a confidence for each segmented block, and a block whose weighted score exceeds 0.5 is considered to meet the target requirement. Further, when several blocks in the picture meet the target requirement, the segmented image with the highest score is determined to be the instruction target.
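A minimal sketch of the weighted feature scoring described above, using the example features and the 0.5 threshold from the text; the exact feature names and weight values are assumptions.

```python
# Sketch of weighted feature matching for cup candidates. Feature names and
# weights are illustrative; the 0.5 confidence threshold follows the text.
CUP_WEIGHTS = {"has_handle": 0.3, "size_ok": 0.4, "contains_liquid": 0.3}

def confidence(features: dict, weights: dict = CUP_WEIGHTS) -> float:
    """Weighted sum of matched binary features, in [0, 1]."""
    return sum(w for name, w in weights.items() if features.get(name))

def pick_target(candidates: list, threshold: float = 0.5):
    """Return the candidate with the highest confidence >= threshold, else None."""
    scored = [(confidence(c), c) for c in candidates]
    scored = [s for s in scored if s[0] >= threshold]
    return max(scored, key=lambda s: s[0])[1] if scored else None
```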
In this embodiment, the image is preprocessed according to the basic characteristics of the instruction target, and feature matching is then performed only on the preprocessed segmented images to obtain the instruction target; this reduces the computation and time required to identify the target and improves the responsiveness of the system. When matching the segmented images, scores within the basic features are refined through other related features, yielding the final degree of match between the target and each segmented image, and the target with the highest confidence completes the recognition process.
In an optional embodiment, the photographing device includes a binocular camera, and the distance information between the instruction target and the wearer acquired by the distance sensing device includes:
respectively acquiring the original distance of the instruction target in the binocular camera image according to the distance sensing device;
and correcting the original distance according to a preset binocular camera distance and a binocular camera visual normal angle to obtain the distance information between the instruction target and the wearer.
Further, the correcting the original distance according to the preset binocular camera distance and the binocular camera visual normal angle includes:
according to the visual normal angle of the binocular camera, respectively performing matrix transformation on the image information of the binocular camera to obtain a preprocessed image, wherein the preprocessed image is a same plane mapping image with the same optical axis;
the distance information between the instruction target and the wearer is obtained based on a geometric relationship between the pre-processed image and the binocular camera distance.
Specifically, taking the acquisition and correction of the cup's distance by the binocular camera as an example: the distance of the cup in each image of the binocular camera is first acquired by the photoelectric sensor. As shown in fig. 2, the two lenses of the binocular camera are separated by a fixed distance x, and the corrected distance between the target and the center of the binocular camera is determined from the geometric relationship between the two lenses. Because of the limits of the visual range and the mounting surface, the binocular camera does not acquire image information in the same visual plane, so the mapping shown in fig. 3 is applied for correction, where m1 is the first image, m2 is the second image, m3 is the first projection image and m4 is the second projection image; the corrected distance is then obtained from the geometric relationship.
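The application does not spell out the correction formula, but for a rectified pair (the same-plane mapping with a common optical axis described above) the standard triangulation relation recovers depth from disparity; the sketch below assumes that standard model rather than reproducing the patent's exact geometry.

```python
# Standard rectified-stereo triangulation: depth Z = f * b / d, where f is the
# focal length in pixels, b the baseline (the fixed lens distance x), and d the
# disparity between the target's horizontal positions in the two images.
def stereo_depth(x_left_px: float, x_right_px: float,
                 focal_px: float, baseline_m: float) -> float:
    """Depth (meters) of a point from its pixel positions in a rectified pair."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    return focal_px * baseline_m / disparity
```

For example, with a 6 cm baseline, a 700 px focal length, and a 40 px disparity, the target lies about 1.05 m from the camera center.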
In this embodiment, the binocular camera is used to obtain the distance relationship between the wearer and the target and to compute the corrected distance; visual images from different planes are processed based on the mapping to obtain accurate corrected image information, which improves the accuracy of target identification.
In an alternative embodiment, the generating the guiding instruction according to the distance information and the target job point location includes:
obtaining a collision distance between an obstacle and the wearer according to the preprocessed image;
generating a shortest obstacle avoidance route according to the distance information and the collision distance;
generating an auxiliary guiding instruction according to the shortest obstacle avoidance route, wherein the auxiliary guiding instruction comprises voice information and vibration signals, the voice information is used for assisting in guiding the wearer to walk, and the vibration signals are used for controlling a feedback array unit at the trunk according to the auxiliary information transmission strategy.
Specifically, the distance between each object in the image information and the wearer is obtained based on the binocular camera; non-instruction targets among them are marked as obstacles, and the distances between the obstacles and the wearer are determined. A threshold range around each obstacle is set as an avoidance zone, the shortest path between the wearer and the target that avoids all avoidance zones is selected as the walking route, and the wearer is guided to the target point by voice information in the earphone. The trunk of the wearer is further provided with a second array feedback unit, which prompts the wearer with turning information through transverse sequential vibration as the route is traveled and replanned.
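The avoidance-zone and shortest-path steps above can be sketched as a breadth-first search on an occupancy grid in which every cell within the threshold range of an obstacle is blocked. The patent does not specify a planner, so this is only one plausible realization, with a hypothetical grid:

```python
from collections import deque

def shortest_obstacle_avoidance_route(grid_size, obstacles, margin, start, goal):
    """Shortest 4-connected route on an occupancy grid.  Every cell
    within `margin` (Chebyshev distance) of an obstacle belongs to its
    avoidance zone and is treated as blocked, mirroring the threshold
    band around obstacles described above."""
    w, h = grid_size
    blocked = {(x + dx, y + dy)
               for (x, y) in obstacles
               for dx in range(-margin, margin + 1)
               for dy in range(-margin, margin + 1)}
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < w and 0 <= ny < h
                    and (nx, ny) not in blocked
                    and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(((nx, ny), path + [(nx, ny)]))
    return None  # no route avoids all avoidance zones

# One obstacle directly between wearer and target, with a one-cell band:
route = shortest_obstacle_avoidance_route(
    grid_size=(8, 8), obstacles=[(3, 3)], margin=1,
    start=(0, 3), goal=(7, 3))
```

Breadth-first search guarantees the returned route is the shortest one that skirts every avoidance zone; the voice and vibration guidance would then be driven by successive waypoints of this route.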
In this embodiment, the obstacle avoidance route is planned from the corrected image information, which effectively improves the obstacle avoidance effect and accuracy; voice guides the travel distance while the vibration unit guides the turning angle, improving the guiding effect and accuracy.
In an optional embodiment, the acquiring, according to the proximity sensor of the manipulator, a relative distance relation between the manipulator and the instruction target, and the generating an auxiliary signal from the distance information and the relative distance information comprises:
acquiring the relative distance between the manipulator and the instruction target;
when the relative distance is greater than or equal to a preset proximity threshold, generating an auxiliary positioning signal according to the distance information and the relative distance, wherein the auxiliary positioning signal is used for controlling the feedback array unit to generate a direction vibration signal according to the auxiliary information transmission strategy, and the direction vibration signal is used for guiding a wearer to adjust the arm pose;
when the relative distance is smaller than a preset proximity threshold, an auxiliary reminding signal is generated, the auxiliary reminding signal is used for controlling the feedback array unit to generate a flicker vibration signal, and the flicker vibration signal is used for reminding a wearer of completing hand actions.
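The two branches above reduce to a threshold test on the proximity reading. A minimal sketch, in which the threshold value and the signal encoding are assumptions for illustration:

```python
def auxiliary_signal(relative_distance, target_direction, proximity_threshold=0.05):
    """Select the feedback-array signal from the manipulator's proximity
    reading: a direction vibration signal while the hand is still at or
    beyond the threshold, a flicker vibration signal once it is within
    the threshold.  Values and field names are illustrative."""
    if relative_distance >= proximity_threshold:
        return {"type": "direction_vibration", "direction": target_direction}
    return {"type": "flicker_vibration"}  # prompt wearer to complete the grasp

far = auxiliary_signal(0.30, target_direction="forward")
near = auxiliary_signal(0.02, target_direction="forward")
```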
Specifically, in this embodiment the mechanical arm is a wearable exoskeleton mechanical arm, with array feedback units laid on the contact surface between the mechanical arm and the human arm. When the wearer reaches the target operation point, the corrected distance obtained from the image information and the distance information from the proximity sensor on the mechanical arm are used to control the array feedback units, arranged around the arm, to vibrate sequentially, so that the movement direction of the arm is guided by the vibration direction of the subunits; when the hand reaches the vicinity of the cup, the vibration unit at the back of the hand releases a signal prompting the wearer to complete the grasping action.
In this embodiment, the exoskeleton mechanical arm guides the wearer's arm movement within the spatial range, so that the arm pose is adjusted and the hand position satisfies the conditions for operating on the instruction target, and the operation prompt is transmitted to the human body through touch.
In an alternative embodiment, the manipulator comprises a prosthetic device communicatively coupled to the central processing device, the central processing device further configured to:
when the wearer reaches the target operation point position, generating a driving instruction according to the relative distance relation and the correction distance, wherein the driving instruction is used for controlling a driving mechanism of the artificial limb device to adjust the pose of the artificial limb device according to target instruction parameters;
and when the prosthetic device reaches the target pose, generating a tail end operation instruction according to the target operation instruction, wherein the tail end operation instruction is used for controlling the prosthetic device to finish hand actions at the tail end.
Specifically, in this optional embodiment, to meet the daily operation needs of visually impaired persons with limb disabilities, the manipulator device includes a mechanical artificial limb. Taking a drinking instruction as an example: when the wearer reaches the target point, the adjacent distance between the mechanical artificial limb and the water cup is acquired from the corrected distance of the visual information and from the proximity sensor. The central processing device then issues instructions to adjust the parameters of each rotary driving structure of the mechanical artificial limb according to the distance information, realizing pose adjustment so that the mechanical artificial limb reaches a position satisfying the gripping condition and performs a preparatory action. The gripping operation is then completed based on the pressure sensor of the hand structure, and the gripped target is brought to the front of the mouth in combination with the image vision to complete the drinking operation; afterwards the rotary driving structure is controlled to change parameters and return along the original path, setting the water cup back down and completing the response to the target instruction.
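The pressure-sensor-based gripping step can be sketched as a simple closed loop that closes the hand drive until a holding-force setpoint is reached. The angles, step size, and setpoint below are illustrative, and `read_pressure` is a hypothetical callback, not an interface from the patent:

```python
def close_grip(read_pressure, step_deg=2.0, grip_setpoint=1.5, max_angle=60.0):
    """Tighten the prosthetic hand's grip drive in small angle steps
    until the palm pressure sensor reports the target holding force,
    then stop.  `read_pressure` maps the current drive angle to a
    sensed pressure (hypothetical units)."""
    angle = 0.0
    while angle < max_angle:
        if read_pressure(angle) >= grip_setpoint:
            return angle  # holding force reached; stop closing
        angle += step_deg
    raise RuntimeError("grip closed fully without reaching holding force")

# A hypothetical cup: contact begins at 30 degrees of closure, and
# pressure then rises 0.2 units per further degree.
cup_pressure = lambda a: max(0.0, (a - 30.0) * 0.2)
grip_angle = close_grip(cup_pressure)
```

Stopping on sensed pressure rather than on a fixed angle is what lets the same grasp routine handle objects of different sizes and stiffnesses.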
In this embodiment, a mechanical artificial limb is adopted to reconstruct the grasping ability of persons with limb disabilities, replacing the mechanical arm device. Through the communication connection between the mechanical artificial limb and the central processing device, it is integrated with the visual auxiliary guidance system, providing life assistance for visually impaired persons with limb disabilities and enabling independent execution of basic daily operations.
In an alternative embodiment, the auxiliary signal includes manipulator displacement information, and the auxiliary information delivery strategy includes:
when the auxiliary signal comprises the horizontal displacement information of the manipulator, the feedback array unit is controlled to vibrate sequentially along the horizontal direction of the displacement information, so as to guide the arm of the wearer to advance and retreat or rotate horizontally;
when the auxiliary signal comprises the vertical displacement information of the manipulator, the feedback array unit is controlled to vibrate from edge to center or from center to edge, and the feedback array unit is used for guiding the arm of the wearer to lift or lower.
Specifically, taking the process of taking a cup as an example: the proximity sensor on the manipulator determines the position and direction of the cup relative to the manipulator, the distance information determined from the image information fixes the spatial relation between the manipulator and the cup, and the central processing device issues an auxiliary control signal to make the array feedback unit vibrate. Concretely, the array feedback unit first vibrates sequentially along the horizontal direction from the manipulator toward the cup, in near-to-far order, guiding the operating arm to extend forward to the cup's projection position in the horizontal direction; it then vibrates from the edge toward the center, prompting the wearer to lower the arm into the gripping area. It will be appreciated that the array feedback unit prompts the wearer to contract the arm when it vibrates in the far-to-near direction, and to raise the arm when it vibrates in the center-to-edge direction.
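The delivery strategy above maps each displacement component to one vibration pattern. A sketch under an assumed axis convention (positive x forward, positive z upward; the pattern names are illustrative labels, not identifiers from the patent):

```python
def vibration_pattern(displacement):
    """Translate a required manipulator displacement into array
    vibration patterns: horizontal moves produce sequential vibration
    along the movement direction, vertical moves produce edge-to-center
    (lower) or center-to-edge (raise) waves."""
    dx, dz = displacement  # horizontal and vertical components
    patterns = []
    if dx > 0:
        patterns.append("sequential_near_to_far")  # extend the arm forward
    elif dx < 0:
        patterns.append("sequential_far_to_near")  # contract the arm
    if dz < 0:
        patterns.append("edge_to_center")          # lower the arm
    elif dz > 0:
        patterns.append("center_to_edge")          # raise the arm
    return patterns

# Reaching a cup slightly ahead of and below the hand:
reach = vibration_pattern((0.25, -0.10))
```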
In this embodiment, through the preset information transmission strategy, image information and related features invisible to the wearer are transmitted to the human skin through tactile vibration, so that the visually impaired person compensates for the lack of vision through touch, improving the ability of visually impaired people to operate independently in complex environments.
In a second aspect, the present application provides an intelligent assistance method for a person with visual dysfunction, which is applied to the intelligent assistance system for a person with visual dysfunction according to any one of the first aspect, and the intelligent assistance method for a person with visual dysfunction includes:
determining a target operation instruction and an instruction target of a wearer according to voice information acquired by a voice device;
determining the position of the instruction target in the image information according to the characteristic information of the instruction target and the image information acquired by the shooting device;
determining a target operation point position according to the distance information between the instruction target and the wearer, which is acquired by the distance sensing device; generating a guide instruction according to the distance information and the target operation point position, wherein the guide instruction is used for assisting the wearer to reach the target operation point position;
when the wearer reaches the target operation point, acquiring relative distance information between the manipulator and the instruction target according to a proximity sensor of the manipulator; generating an auxiliary signal according to the distance information and the relative distance information, wherein the auxiliary signal is used for controlling a feedback array unit of the manipulator to send out guide information according to a preset auxiliary information transmission strategy; the guiding information is used for assisting the wearer in adjusting the arms to reach the target working pose.
Compared with the prior art, the intelligent auxiliary method for visually impaired persons has the same advantages as the intelligent auxiliary system for visually impaired persons described above, which are not repeated here.
An electronic device that can be a server or a client of the present application will now be described, which is an example of a hardware device that can be applied to aspects of the present application. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
The electronic device includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) or a computer program loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device may also be stored. The computing unit, ROM and RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like. In the present application, the units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present application. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Although the application is disclosed above, the scope of the application is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the application, and these changes and modifications will fall within the scope of the application.

Claims (10)

1. The intelligent auxiliary system for visually impaired people is characterized by comprising a voice device, a shooting device, a central processing device, a distance sensing device and a manipulator, wherein the central processing device is respectively in communication connection with the voice device, the shooting device, the distance sensing device and the manipulator; the central processing device is used for:
determining a target operation instruction and an instruction target of a wearer according to the voice information acquired by the voice device;
determining the position of the instruction target in the image information according to the characteristic information of the instruction target and the image information acquired by the shooting device;
determining a target operation point position according to the distance information between the instruction target and the wearer, which is acquired by the distance sensing device; generating a guide instruction according to the distance information and the target operation point position, wherein the guide instruction is used for assisting the wearer to reach the target operation point position;
when the wearer reaches the target operation point, acquiring relative distance information between the manipulator and the instruction target according to a proximity sensor of the manipulator; generating an auxiliary signal according to the distance information and the relative distance information, wherein the auxiliary signal is used for controlling a feedback array unit of the manipulator to send out guide information according to a preset auxiliary information transmission strategy; the guiding information is used for assisting the wearer in adjusting the arms to reach the target working pose.
2. The intelligent assistance system for a visually impaired according to claim 1, wherein the determining the position of the instruction target in the image information based on the feature information of the instruction target and the image information acquired by the photographing device includes:
dividing the image information according to the characteristics of a preset instruction target to obtain a divided image;
performing feature comparison and feature assignment on the segmented image according to preset matching weights;
and obtaining the confidence coefficient of each image information according to the feature assignment, and determining the segmented image as the instruction target when the confidence coefficient is greater than or equal to a confidence coefficient threshold value.
3. The intelligent assistance system for visually impaired according to claim 1, wherein the photographing device comprises a binocular camera, and the distance information between the instruction target and the wearer acquired by the distance sensing device comprises:
respectively acquiring the original distance of the instruction target in the binocular camera image according to the distance sensing device;
and correcting the original distance according to a preset binocular camera distance and a binocular camera visual normal angle to obtain the distance information between the instruction target and the wearer.
4. The intelligent assistance system for visually impaired according to claim 3, wherein said correcting the original distance according to a preset binocular camera pitch and binocular camera visual normal angle comprises:
according to the visual normal angle of the binocular camera, respectively performing matrix transformation on the image information of the binocular camera to obtain a preprocessed image, wherein the preprocessed image is a same plane mapping image with the same optical axis;
the distance information between the instruction target and the wearer is obtained based on a geometric relationship between the pre-processed image and the binocular camera distance.
5. The vision-dysfunctional intelligent assistance system of claim 4, wherein the generating guidance instructions from the distance information and the target job point location comprises:
obtaining a collision distance between an obstacle and the wearer according to the preprocessed image;
generating a shortest obstacle avoidance route according to the distance information and the collision distance;
generating an auxiliary guiding instruction according to the shortest obstacle avoidance route, wherein the auxiliary guiding instruction comprises voice information and vibration signals, the voice information is used for assisting in guiding the wearer to walk, and the vibration signals are used for controlling a feedback array unit at the trunk according to the auxiliary information transmission strategy.
7. The vision-dysfunctional-oriented intelligent assistance system of claim 1, wherein the acquiring, according to the proximity sensor of the manipulator, a relative distance relationship between the manipulator and the command target, and the generating an auxiliary signal from the distance information and the relative distance information comprises:
acquiring the relative distance between the manipulator and the instruction target;
when the relative distance is greater than or equal to a preset proximity threshold, generating an auxiliary positioning signal according to the distance information and the relative distance, wherein the auxiliary positioning signal is used for controlling the feedback array unit to generate a direction vibration signal according to the auxiliary information transmission strategy, and the direction vibration signal is used for guiding a wearer to adjust the arm pose;
when the relative distance is smaller than a preset proximity threshold, an auxiliary reminding signal is generated, the auxiliary reminding signal is used for controlling the feedback array unit to generate a flicker vibration signal, and the flicker vibration signal is used for reminding a wearer of completing hand actions.
7. The vision-dysfunctional intelligent assistance system of any one of claims 1-6, wherein the manipulator comprises a prosthetic device communicatively coupled to the central processing device, the central processing device further configured to:
when the wearer reaches the target operation point position, generating a driving instruction according to the relative distance relation and the correction distance, wherein the driving instruction is used for controlling a driving mechanism of the artificial limb device to adjust the pose of the artificial limb device according to target instruction parameters;
and when the prosthetic device reaches the target pose, generating a tail end operation instruction according to the target operation instruction, wherein the tail end operation instruction is used for controlling the prosthetic device to finish hand actions at the tail end.
8. The intelligent assistance system for visually impaired persons according to any one of claims 1 to 6, wherein the assistance signal for controlling the feedback array unit to vibrate according to a preset assistance information delivery strategy comprises:
obtaining a moving direction and a moving distance in a space range according to the target moving pose and the current manipulator pose in the auxiliary signal; controlling the feedback array unit to sequentially generate vibration signals along the moving direction; and when the manipulator moves the action distance along the moving direction, controlling the feedback array unit to flicker and vibrate.
9. The vision-dysfunctional intelligent assistance system of claim 1, wherein the assistance signal comprises manipulator displacement information, the assistance information delivery strategy comprising:
when the auxiliary signal comprises the horizontal displacement information of the manipulator, the feedback array unit is controlled to vibrate sequentially along the horizontal direction of the displacement information, so as to guide the arm of the wearer to advance and retreat or rotate horizontally;
when the auxiliary signal comprises the vertical displacement information of the manipulator, the feedback array unit is controlled to vibrate from edge to center or from center to edge, and the feedback array unit is used for guiding the arm of the wearer to lift or lower.
10. A vision-dysfunctional-person-oriented intelligent assistance method applied to the vision-dysfunctional-person-oriented intelligent assistance system according to any one of claims 1 to 9, the vision-dysfunctional-person-oriented intelligent assistance method comprising:
determining a target operation instruction and an instruction target of a wearer according to voice information acquired by a voice device;
determining the position of the instruction target in the image information according to the characteristic information of the instruction target and the image information acquired by the shooting device;
determining a target operation point position according to the distance information between the instruction target and the wearer, which is acquired by the distance sensing device; generating a guide instruction according to the distance information and the target operation point position, wherein the guide instruction is used for assisting the wearer to reach the target operation point position;
when the wearer reaches the target operation point, acquiring relative distance information between the manipulator and the instruction target according to a proximity sensor of the manipulator; generating an auxiliary signal according to the distance information and the relative distance information, wherein the auxiliary signal is used for controlling a feedback array unit of the manipulator to send out guide information according to a preset auxiliary information transmission strategy; the guiding information is used for assisting the wearer in adjusting the arms to reach the target working pose.
CN202311105770.8A 2023-08-30 2023-08-30 Intelligent auxiliary system and method for visually impaired people Pending CN117001715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311105770.8A CN117001715A (en) 2023-08-30 2023-08-30 Intelligent auxiliary system and method for visually impaired people

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311105770.8A CN117001715A (en) 2023-08-30 2023-08-30 Intelligent auxiliary system and method for visually impaired people

Publications (1)

Publication Number Publication Date
CN117001715A true CN117001715A (en) 2023-11-07

Family

ID=88563799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311105770.8A Pending CN117001715A (en) 2023-08-30 2023-08-30 Intelligent auxiliary system and method for visually impaired people

Country Status (1)

Country Link
CN (1) CN117001715A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609504A (en) * 2009-07-21 2009-12-23 华中科技大学 A kind of method for detecting, distinguishing and locating infrared imagery sea-surface target
CN103271784A (en) * 2013-06-06 2013-09-04 山东科技大学 Man-machine interactive manipulator control system and method based on binocular vision
CN104473717A (en) * 2014-12-04 2015-04-01 上海交通大学 Wearable guide apparatus for totally blind people
CN105787442A (en) * 2016-02-19 2016-07-20 电子科技大学 Visual interaction based wearable auxiliary system for people with visual impairment, and application method thereof
CN108564602A (en) * 2018-04-16 2018-09-21 北方工业大学 Airplane detection method based on airport remote sensing image
CN110340893A (en) * 2019-07-12 2019-10-18 哈尔滨工业大学(威海) Mechanical arm grasping means based on the interaction of semantic laser
CN110559127A (en) * 2019-08-27 2019-12-13 上海交通大学 intelligent blind assisting system and method based on auditory sense and tactile sense guide
KR20200038017A (en) * 2018-10-02 2020-04-10 (주)네모 System and method for providing information service for people with visually impairment
CN111283689A (en) * 2020-03-26 2020-06-16 长春大学 Device for assisting movement of limb dysfunction patient and control method
CN112587285A (en) * 2020-12-10 2021-04-02 东南大学 Multi-mode information guide environment perception myoelectricity artificial limb system and environment perception method
CN113180894A (en) * 2021-04-27 2021-07-30 浙江大学 Visual intelligence-based hand-eye coordination method and device for multiple-obstacle person
CN113377097A (en) * 2021-01-25 2021-09-10 杭州易享优智能科技有限公司 Path planning and obstacle avoidance method for blind person guide
CN114282052A (en) * 2021-12-24 2022-04-05 空间视创(重庆)科技股份有限公司 Video image positioning method and system based on frame characteristics
CN115218903A (en) * 2022-05-12 2022-10-21 北京具身智能科技有限公司 Object searching method and system for visually impaired people
CN115690217A (en) * 2022-11-11 2023-02-03 高德软件有限公司 Tree position determining method, medium and computing device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhu Aibin; He Dayong; Luo Wencheng; Chen Wei: "Research on a wearable guide robot for the blind based on binocular vision", Machine Design & Research, no. 05, 20 October 2016 (2016-10-20) *
Wang Yuanyuan: "Research and application of digital measurement technology for road surface topography", 31 January 2022, Southwest Jiaotong University Press, pages: 52 - 54 *

Similar Documents

Publication Publication Date Title
US10384348B2 (en) Robot apparatus, method for controlling the same, and computer program
Schröer et al. An autonomous robotic assistant for drinking
US10157313B1 (en) 3D gaze control of robot for navigation and object manipulation
US20200055195A1 (en) Systems and Methods for Remotely Controlling a Robotic Device
US20170266019A1 (en) Control of Limb Device
US20160250751A1 (en) Providing personalized patient care based on electronic health record associated with a user
CN112587285B (en) Multi-mode information guide environment perception myoelectric artificial limb system and environment perception method
JP2006247780A (en) Communication robot
US9613505B2 (en) Object detection and localized extremity guidance
JP2013111737A (en) Robot apparatus, control method thereof, and computer program
JPH11198075A (en) Behavior support system
Bao et al. Vision-based autonomous walking in a lower-limb powered exoskeleton
WO2023143408A1 (en) Article grabbing method for robot, device, robot, program, and storage medium
Grewal et al. Autonomous wheelchair navigation in unmapped indoor environments
CN115698631A (en) Walking-aid robot navigation method, walking-aid robot and computer readable storage medium
KR20220058941A (en) direction assistance system
CN113876556A (en) Three-dimensional laser scanning massage robot system
JP2007130691A (en) Communication robot
CN117001715A (en) Intelligent auxiliary system and method for visually impaired people
CN116100565A (en) Immersive real-time remote operation platform based on exoskeleton robot
Yang et al. Head-free, human gaze-driven assistive robotic system for reaching and grasping
CN115063879A (en) Gesture recognition device, moving object, gesture recognition method, and storage medium
Rohmer et al. Laser based driving assistance for smart robotic wheelchairs
Chee et al. Eye Tracking Electronic Wheelchair for physically challenged person
JP2020151012A (en) Communication system, and control method of communication system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination