CN110693654B - Method and device for adjusting intelligent wheelchair and electronic equipment - Google Patents
- Publication number
- CN110693654B CN110693654B CN201910979882.3A CN201910979882A CN110693654B CN 110693654 B CN110693654 B CN 110693654B CN 201910979882 A CN201910979882 A CN 201910979882A CN 110693654 B CN110693654 B CN 110693654B
- Authority
- CN
- China
- Prior art keywords
- user
- information
- intelligent wheelchair
- image
- obtaining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61G—TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
- A61G5/00—Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
- A61G5/10—Parts, details or accessories
- A61G5/1056—Arrangements for adjusting the seat
- A61G5/1059—Arrangements for adjusting the seat adjusting the height of the seat
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/01—Measuring temperature of body parts ; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
- A61B5/02055—Simultaneously evaluating both cardiovascular condition and temperature
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61G—TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
- A61G5/00—Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
- A61G5/10—Parts, details or accessories
- A61G5/1056—Arrangements for adjusting the seat
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61G—TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
- A61G5/00—Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
- A61G5/10—Parts, details or accessories
- A61G5/1056—Arrangements for adjusting the seat
- A61G5/1067—Arrangements for adjusting the seat adjusting the backrest relative to the seat portion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61G—TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
- A61G2203/00—General characteristics of devices
- A61G2203/10—General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
- A61G2203/18—General characteristics of devices characterised by specific control means, e.g. for adjustment or steering by patient's head, eyes, facial muscles or voice
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61G—TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
- A61G2203/00—General characteristics of devices
- A61G2203/10—General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
- A61G2203/20—Displays or monitors
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Physics & Mathematics (AREA)
- Cardiology (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Physiology (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Educational Technology (AREA)
- Psychiatry (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Hospice & Palliative Care (AREA)
- Pulmonology (AREA)
- Developmental Disabilities (AREA)
- Child & Adolescent Psychology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The disclosure relates to a method for adjusting an intelligent wheelchair, a device for adjusting an intelligent wheelchair, and an electronic device. The method of intelligent wheelchair adjustment comprises: acquiring user information of a user; acquiring current activity information of the user through sound and/or images collected in real time; and adjusting the form of the intelligent wheelchair based on the user information and the activity information. With the method provided by the disclosure, the intelligent wheelchair is adjusted according to the acquired user information, so that the form of the wheelchair suits the user. By synchronously acquiring the user's current activity in real time, the intelligent wheelchair in use can adjust itself automatically according to the user's current environment and behavior state, for example by adjusting the seat height or the inclination angle of the chair back, so that the adjusted form lets the user use the intelligent wheelchair comfortably in various environments, improving the user experience.
Description
Technical Field
The present disclosure relates to the field of internet of things technology, and in particular, to a method for adjusting an intelligent wheelchair, an apparatus for adjusting an intelligent wheelchair, an electronic device, and a computer-readable storage medium.
Background
The wheelchair is not only a rehabilitation tool but also an important mobility aid for disabled people and people with limited mobility, facilitating exercise and daily activities. In the related art, however, the intelligent wheelchair serves only as a mobility aid that makes going out more convenient, and the user must manually adjust the wheelchair's configuration to adapt it to himself or herself.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method for adjusting an intelligent wheelchair, an apparatus for adjusting an intelligent wheelchair, and an electronic device.
According to an aspect of an embodiment of the present disclosure, there is provided a method of intelligent wheelchair adjustment, including: acquiring user information of a user; acquiring current activity information of a user through sound and/or images acquired in real time; and adjusting the form of the intelligent wheelchair based on the user information and the activity information.
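The three-step flow above (acquire user information, acquire real-time activity information, adjust the wheelchair form) can be sketched as follows. This is a minimal illustration only: the data types, field names, seat-height ratio, and backrest angles are all assumptions, not part of the patent.

```python
from dataclasses import dataclass

# Hypothetical data types for the three-step flow: user information,
# real-time activity information, and a resulting wheelchair form.
@dataclass
class UserInfo:
    height_cm: float          # user's height, entered manually or estimated by camera

@dataclass
class ActivityInfo:
    scene: str                # e.g. "conversation", "shopping"
    target_standing: bool     # whether the conversation partner is standing

@dataclass
class WheelchairForm:
    seat_height_cm: float
    backrest_angle_deg: float # 90 = upright

def adjust_form(user: UserInfo, activity: ActivityInfo) -> WheelchairForm:
    """Derive a wheelchair form from user info plus current activity info."""
    # Baseline seat height proportional to the user's height (illustrative ratio).
    seat = user.height_cm * 0.25
    angle = 100.0             # slightly reclined default
    if activity.scene == "conversation":
        angle = 95.0          # sit up a little for a conversation
        if activity.target_standing:
            seat += 10.0      # raise the seat to reduce the height difference
    return WheelchairForm(seat_height_cm=seat, backrest_angle_deg=angle)
```

A real controller would of course drive actuators within mechanical limits; here the function only computes target values.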
In one embodiment, the user information includes height information of the user. Obtaining the current activity information of the user through the sound and/or image collected in real time includes: collecting sound and/or images in real time; identifying, based on the sound and/or image, the human body posture of a target object communicating with the user; and obtaining the activity information according to the human body posture. Adjusting the form of the intelligent wheelchair based on the user information and the activity information includes: adjusting the form of the intelligent wheelchair based on the human body posture and the height information.
In an embodiment, acquiring the user information of the user further includes: acquiring interpersonal relationship information of the user, the interpersonal relationship information including a name and the interpersonal relationship between the person of that name and the user. Adjusting the form of the intelligent wheelchair based on the user information and the activity information further includes: adjusting the form of the intelligent wheelchair based on the interpersonal relationship information and the activity information.
In another embodiment, obtaining the current activity information of the user through the sound and/or image collected in real time includes: collecting the voice of the user; performing voice recognition on the voice to obtain a name; obtaining the interpersonal relationship corresponding to the name; and obtaining the activity information according to the interpersonal relationship.
In a further embodiment, the interpersonal relationship information further includes a person image corresponding to the name. Obtaining the current activity information of the user through the sound and/or image collected in real time includes: collecting images in real time; comparing a collected image with the person image through image recognition; obtaining the interpersonal relationship corresponding to the image based on the comparison; and obtaining the activity information according to the interpersonal relationship.
In one embodiment, the activity information includes an activity scene. Obtaining the current activity information of the user through the sound and/or image collected in real time includes: collecting images in real time; and recognizing a collected image through a neural network to obtain the activity scene corresponding to the image. Adjusting the form of the intelligent wheelchair based on the user information and the activity information includes: adjusting the form of the intelligent wheelchair based on the user information and the activity scene.
In another embodiment, obtaining the current activity information of the user through the sound and/or image collected in real time further includes: collecting images in real time, acquiring the geographic position of the user, and obtaining the activity scene based on the geographic position.
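Deriving an activity scene from the geographic position, as in the embodiment above, might look like the following sketch. The point-of-interest categories, scene labels, and the mapping itself are illustrative assumptions; a real system would query a map service and/or a neural-network image classifier.

```python
# Illustrative mapping from a point-of-interest category at the user's
# geographic position to an activity scene; all names are assumptions.
POI_TO_SCENE = {
    "fast_food": "dining",
    "office": "work",
    "park": "leisure",
}

def scene_from_location(poi_category: str) -> str:
    """Map a located point-of-interest category to an activity scene."""
    # Fall back to a generic label when the location is not recognized,
    # so the wheelchair keeps its current form instead of guessing.
    return POI_TO_SCENE.get(poi_category, "unknown")
```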
In one embodiment, obtaining the current activity information of the user through the sound and/or image collected in real time includes: acquiring emotion information of the user through the sound and/or image collected in real time. Adjusting the form of the intelligent wheelchair based on the user information and the activity information includes: adjusting the form of the intelligent wheelchair based on the user information and the emotion information.
In another embodiment, obtaining the emotion information of the user includes: sending a request to a wearable device, receiving the heart rate state and/or body temperature state of the user sent by the wearable device in response to the request, and obtaining the emotion information of the user based on the heart rate state and/or body temperature state.
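A coarse sketch of inferring emotion information from the wearable readings described above could look like this. The emotion labels and the heart-rate/temperature thresholds are illustrative assumptions, not clinical values or part of the patent.

```python
from typing import Optional

def emotion_from_vitals(heart_rate_bpm: float,
                        body_temp_c: Optional[float] = None) -> str:
    """Coarse emotion estimate from wearable vitals; thresholds are illustrative."""
    if heart_rate_bpm >= 100:
        return "agitated"          # elevated heart rate
    if body_temp_c is not None and body_temp_c >= 37.5:
        return "unwell"            # elevated body temperature
    if heart_rate_bpm <= 60:
        return "relaxed"           # resting heart rate
    return "calm"
```

The wheelchair could, for example, recline the backrest when the result is "relaxed" and keep a supportive upright form when "agitated".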
According to another aspect of the disclosed embodiments, there is provided an apparatus for intelligent wheelchair adjustment, including: an acquisition module configured to acquire user information of a user and to acquire current activity information of the user through sound and/or images collected in real time; and an adjusting module configured to adjust the form of the intelligent wheelchair based on the user information and the activity information.
In an embodiment, the user information includes height information of the user. The acquisition module is further configured to collect sound and/or images in real time, identify, based on the sound and/or image, the human body posture of a target object communicating with the user, and obtain the activity information according to the human body posture. The adjusting module is configured to adjust the form of the intelligent wheelchair based on the human body posture and the height information.
In an embodiment, the acquisition module is further configured to acquire interpersonal relationship information of the user, the interpersonal relationship information including a name and the interpersonal relationship between the person of that name and the user. The adjusting module is configured to adjust the form of the intelligent wheelchair based on the interpersonal relationship information and the activity information.
In another embodiment, the acquisition module is further configured to collect the voice of the user, perform voice recognition on the voice to obtain a name, obtain the interpersonal relationship corresponding to the name, and obtain the activity information according to the interpersonal relationship.
In one embodiment, the activity information includes an activity scene. The acquisition module is further configured to collect images in real time and recognize a collected image through a neural network to obtain the activity scene corresponding to the image. The adjusting module is configured to adjust the form of the intelligent wheelchair based on the user information and the activity scene.
In another embodiment, the acquisition module is further configured to collect images in real time, acquire the geographic position of the user, and obtain the activity scene based on the geographic position.
In an embodiment, the acquisition module is further configured to acquire emotion information of the user through the sound and/or image collected in real time. The adjusting module is configured to adjust the form of the intelligent wheelchair based on the user information and the emotion information.
In another embodiment, the acquisition module is further configured to send a request to a wearable device, receive the heart rate state and/or body temperature state of the user sent by the wearable device in response to the request, and obtain the emotion information of the user based on the heart rate state and/or body temperature state.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a memory configured to store instructions; and a processor configured to invoke the instructions stored in the memory to perform any one of the foregoing methods of intelligent wheelchair adjustment.
According to yet another aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform any one of the foregoing methods of intelligent wheelchair adjustment.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: the intelligent wheelchair is adjusted according to the acquired user information, so that the form of the wheelchair suits the user. By synchronously acquiring the user's current activity in real time, the intelligent wheelchair in use can adjust itself automatically according to the user's current environment and behavior state, for example by adjusting the seat height or the inclination angle of the chair back, so that the adjusted form lets the user use the intelligent wheelchair comfortably in various environments, improving the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method of intelligent wheelchair adjustment according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating another method of intelligent wheelchair adjustment according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating yet another method of intelligent wheelchair adjustment in accordance with an exemplary embodiment.
FIG. 4 is a flow chart illustrating yet another method of intelligent wheelchair adjustment in accordance with an exemplary embodiment.
FIG. 5 is a block diagram illustrating an apparatus for intelligent wheelchair adjustment, according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating an apparatus in accordance with an example embodiment.
FIG. 7 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the embodiments provided in the present disclosure, the wearable device referred to below may be a device such as a smart band or a smart watch, and the terminal may be a device such as a mobile phone and a tablet, which are not limited herein.
At present, the intelligent wheelchair is adjusted by the user operating a pedal on the wheelchair or manually changing the wheelchair's shape so that it is convenient to use. With the method of intelligent wheelchair adjustment provided by the present disclosure, the intelligent wheelchair determines the user's current activity information from the acquired user information and from sound and/or images collected in real time, and adjusts itself automatically, so that its form matches the user's usage habits and the user's experience with the intelligent wheelchair is improved.
FIG. 1 is a flow chart illustrating a method of intelligent wheelchair adjustment according to an exemplary embodiment. As shown in FIG. 1, the method 10 of intelligent wheelchair adjustment includes the following steps.
In step S11, user information of the user is acquired.
The intelligent wheelchair comprises a seat, a chair back, a pedal and the like. In one embodiment, the height of the user's sitting posture is obtained, and forms such as the seat height of the intelligent wheelchair and the inclination angle of the chair back are adjusted so that the user sits at a suitable height, making the wheelchair convenient for disabled people to use.
In one embodiment, the intelligent wheelchair obtains the height information through manual input by the user. In another embodiment, the camera of the intelligent wheelchair scans the user's whole body to estimate the height information, so that the form of the intelligent wheelchair is adjusted automatically and the adjusted wheelchair suits the user.
In step S12, the current activity information of the user is obtained through the sound and/or image collected in real time.
The current activity state of the user is judged by collecting, in real time, the sound and/or images of the user and the user's surroundings, thereby obtaining the current activity information. For example: if images of the same person facing the user are collected over a long period, it is judged that the user is interacting with that person; if an image of a fast-food restaurant's front counter is collected, it is judged that the user is buying food; or, from a collected frontal image of a person facing the user together with the user's collected speech, it is judged that the user is communicating with that person. The intelligent wheelchair can automatically adjust its form according to the recognized activity state to match the user's movements and improve the user experience. Images may be collected by the wheelchair's own camera or through linkage with surrounding camera equipment.
In step S13, the form of the intelligent wheelchair is adjusted based on the user information and the activity information.
By acquiring both the user information and the activity information, the intelligent wheelchair gathers user-related information through multiple channels. It can thus understand the user's usage habits comprehensively, adjust itself automatically according to the user's real-time activity state, and bring the user a comfortable experience.
Through the above embodiment, the intelligent wheelchair is adjusted according to the acquired user information, so that the form of the wheelchair suits the user. The intelligent wheelchair synchronously acquires the user's current activity state in real time and adjusts itself automatically according to the current environment and behavior state, matching the user's behavior, so that the user can use the intelligent wheelchair comfortably in different scenes and the user experience is improved.
Based on the same inventive concept, the present disclosure also provides another exemplary method of intelligent wheelchair adjustment.
FIG. 2 illustrates a flow chart of another exemplary method of intelligent wheelchair adjustment. Referring to FIG. 2, in this embodiment the user information includes height information of the user, and obtaining the current activity information of the user through the sound and/or image collected in real time in step S12 may include the following steps.
In step S121, sound and/or images are collected in real time.
The user's voice, collected by the intelligent wheelchair in real time, is used to judge whether the user is currently speaking, and the current activity information is then acquired from the speech content.
In one embodiment, while the user is using the intelligent wheelchair, the wheelchair collects images of the user and of the surrounding environment in real time, acquiring the user's current activity information so that the form can be adjusted automatically according to the user's activity, for the user's convenience.
In another embodiment, the intelligent wheelchair acquires the current activity information by collecting images of the user and of the surrounding environment in real time and combining them with the sound collected at the same time.
In step S122, a human body posture of the target object with which the user communicates is identified based on the sound and/or the image.
Voice recognition is performed on the collected user sound to recognize whether the user is speaking. From the recognized speech content, it is judged whether the user is currently communicating with someone else or talking to himself or herself, and from the volume of the collected speech the distance between the target object and the user is judged, so as to further determine the human body posture of the target object.
In a practical application scenario: it is determined from the collected speech content that the user is communicating with a target object. When the volume of the user's speech is very loud, it is judged that the height difference between the target object and the user is large, and, combined with the known current height of the wheelchair user, the target object is determined to be standing upright or to be in an area at a large height distance from the user. When the volume of the collected speech is normal or low, combined with the known current height of the wheelchair user, the target object is determined to be at roughly the same height as the user and nearby, i.e. in a sitting state.
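The volume-based heuristic described above can be sketched in a few lines. The decibel threshold is an assumption chosen purely for illustration; a deployed system would calibrate it per microphone and environment.

```python
def infer_target_posture(speech_volume_db: float,
                         loud_threshold_db: float = 65.0) -> str:
    """Infer the conversation partner's posture from the user's speech volume.

    Heuristic from the scenario above: a loud voice suggests a large height
    difference (a standing target); normal or quiet speech suggests a seated
    target at a similar height. The threshold value is an assumption.
    """
    return "standing" if speech_volume_db > loud_threshold_db else "sitting"
```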
In one embodiment, the user's posture is obtained from the collected images, and the current activity state can be judged from the recognized posture. When the user is recognized as communicating with a captured target object, the body posture of the target object is identified; for example, the body posture may include height (tall or short) and pose (standing or sitting). This makes it easier for the user to communicate with the target object and removes the discomfort that a height difference brings.
In another embodiment, the user's voice collected in real time is used to determine whether the user is currently speaking. Through voice recognition, it is determined whether the user is speaking, and from the recognized content whether the user is communicating with someone else or talking to himself or herself. Combined with images collected synchronously in real time, whether the user is currently communicating with someone can be confirmed from the user's posture in the images. When images collected over a long period contain interaction between the user and a target object, and the speech content collected in real time confirms that the user is in a conversation, it can be confirmed that the captured target object is interacting with the user.
In step S123, activity information is obtained according to the body posture.
From the identified body posture of the target object, the user's current activity information can be judged, so that the intelligent wheelchair can be adjusted to match the user's current activity. For example: when the target object's body posture is recognized as communicating with the head lowered, it can be judged that the user is communicating with someone; when the target object's body posture is recognized as handing an object to the user, it can be judged that the user is shopping.
In step S13, adjusting the form of the intelligent wheelchair based on the user information and the activity information includes: adjusting the form of the intelligent wheelchair based on the human body posture and the height information.
Based on the user's height information and the recognized body posture of the target object, the intelligent wheelchair can adjust itself automatically and specifically, matching the user's needs in different activity states. For example: in the images collected in real time, the user is looking in one direction and appears to be conversing, and a target object in that direction is looking back at the user, so it can be judged that the user is communicating with that person. The user's current height is determined from the height information, and the height difference between the user and the conversation partner is obtained from the collected images. By automatically adjusting the height of the intelligent wheelchair or the inclination angle of the chair back, the user can communicate with the target object in a comfortable posture, reducing the psychological discomfort caused by the height difference. In one embodiment, based on the collected images, the user's height can be adjusted to be level with the target object's line of sight, so that the two parties face each other at eye level and converse on an equal footing. In another embodiment, based on the collected sound, the user's height can be adjusted into a range suitable for communicating with the target object, and the angle of the wheelchair's backrest adjusted, so that the conversation proceeds smoothly.
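The eye-level alignment described above reduces to shifting the seat by the eye-level difference and clamping to the mechanical range. The function and all numeric limits below are illustrative assumptions:

```python
def seat_height_for_eye_contact(current_seat_cm: float,
                                user_eye_cm: float,
                                target_eye_cm: float,
                                min_cm: float = 40.0,
                                max_cm: float = 70.0) -> float:
    """Raise or lower the seat so the user's eye level approaches the target's.

    A sketch only: a real controller would also respect actuator speed,
    load, and safety constraints. The clamp range is an assumption.
    """
    desired = current_seat_cm + (target_eye_cm - user_eye_cm)
    return max(min_cm, min(max_cm, desired))  # clamp to the mechanical range
```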
Through the above embodiment, the intelligent wheelchair can adjust itself automatically according to the recognized activity information of the user, assisting the user's activities when interacting with others and reducing the psychological discomfort caused by factors such as height difference, thereby improving the user experience.
In one embodiment, in step S11, acquiring the user information of the user further includes: acquiring interpersonal relationship information of the user, the interpersonal relationship information including a name and the interpersonal relationship between the person of that name and the user.
Interpersonal relationship information related to the user is acquired, for example a name and the interpersonal relationship corresponding to that name. From the interpersonal relationship information acquired in advance, the intelligent wheelchair can judge the degree of intimacy between the user and the target object currently being interacted with, and so present different postures for different relationships, facilitating good communication. For example: when the target object interacting with the user is recognized as a relative, the chair back of the intelligent wheelchair can tilt back slightly so that the user is relaxed; when the target object is the user's boss, the chair back can be set upright so that the user interacts attentively and politely.
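The relative-versus-boss behavior above amounts to a lookup of preset backrest angles per relationship. Both the relationship categories and the angles below are assumptions for illustration:

```python
# Preset backrest angles per interpersonal relationship, in degrees from
# the seat plane (90 = upright); categories and values are assumptions.
RELATION_PRESETS = {
    "relative": 110.0,  # recline slightly for a relaxed posture
    "friend": 105.0,
    "boss": 90.0,       # sit upright for a formal interaction
}

def backrest_for_relation(relation: str, default: float = 100.0) -> float:
    """Return the preset backrest angle for a recognized relationship."""
    return RELATION_PRESETS.get(relation, default)
```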
In one embodiment, obtaining the interpersonal relationship information of the user includes: sending a request to a terminal, receiving the interpersonal relationship sent by the terminal in response to the request, and obtaining the interpersonal relationship information of the user based on the received interpersonal relationship.
The intelligent wheelchair can acquire the user's interpersonal relationship information through a terminal; for example, it can obtain the user's address book from a mobile phone and judge the interpersonal relationships from the address book. The terminal may be any terminal capable of acquiring the user's interpersonal relationship information, such as a mobile phone, a smart bracelet, or a smart watch, and is not limited here.
In another example, the user can manually enter names and the interpersonal relationships corresponding to those names into the intelligent wheelchair, making the interpersonal relationship information obtained by the intelligent wheelchair more accurate.
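Building relationship information from an address book fetched from the terminal might look like the following sketch. The contact record layout (`name`/`relation` fields) is an assumption; a real address book would need its own parsing.

```python
# Sketch of turning a contact list obtained from the user's phone into the
# name -> relationship mapping described above. The record fields are
# hypothetical; contacts without a relationship tag are skipped.

def build_relationships(contacts):
    """Map each tagged contact name to its labeled relationship."""
    return {c["name"]: c["relation"] for c in contacts if c.get("relation")}

contacts = [
    {"name": "Li Wei", "relation": "relative"},
    {"name": "Zhang Min", "relation": "boss"},
    {"name": "Unknown Caller"},  # no relationship tag: ignored
]
print(build_relationships(contacts))
# {'Li Wei': 'relative', 'Zhang Min': 'boss'}
```

Manually entered name/relationship pairs, as in the example above, could simply be merged into the same mapping.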
In step S13, adjusting the form of the intelligent wheelchair based on the user information and the activity information includes: adjusting the form of the intelligent wheelchair based on the interpersonal relationship information and the activity information.
The intelligent wheelchair can adjust its form according to the acquired interpersonal relationship and, based on the acquired activity information of the user, provide a personalized form for interactions with different people, making the wheelchair convenient to use and helping the user communicate better with others. In an embodiment, the user can preset the form of the intelligent wheelchair for different interpersonal relationships in advance; when the target object is identified as belonging to the user's interpersonal relationship information, the preset form is provided, making it convenient for the user to interact with the target object.
Based on the same inventive concept, the present disclosure also provides another exemplary method of intelligent wheelchair adjustment.
FIG. 3 is a flow chart of yet another exemplary method of intelligent wheelchair adjustment. Referring to fig. 3, obtaining the current activity information of the user through the sound and/or image collected in real time in step S12 may further include the following steps.
In step S124, the voice of the user is collected.
Before the user interacts with the target object, the form of the intelligent wheelchair can be adjusted in advance according to the user's voice prompt, so that the seat height, the backrest inclination, and so on are suited to the user's interaction with the target object.
In step S125, speech recognition is performed on the speech to obtain a name.
Through the user's voice prompt, the name provided by the user is recognized using a speech recognition model, facilitating the adjustment of the intelligent wheelchair.
In step S126, the interpersonal relationship corresponding to the name is obtained based on the name.
According to the name obtained by the voice prompt of the user, the intelligent wheelchair can quickly find the interpersonal relationship corresponding to the name by utilizing the interpersonal relationship information obtained in advance, and the intelligent wheelchair can be adjusted by combining the interpersonal relationship.
In step S127, activity information is obtained from the interpersonal relationship.
And determining that the user is interacting with the acquainted person through the acquired interpersonal relationship, thereby obtaining the activity information of the current user.
Through this embodiment, the intelligent wheelchair can recognize by voice the interpersonal relationship between the target object and the user, provide a targeted form for the user, and create good conditions for interaction.
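Steps S124 to S127 can be sketched as a short pipeline: recognize a name from the user's speech, look up the corresponding relationship, and derive activity information. The speech recognizer below is a stub standing in for a real ASR model, and all names and the dictionary layout are illustrative.

```python
# Hypothetical end-to-end sketch of steps S124-S127. recognize_name() is a
# stub in place of a speech-recognition model; the relationship mapping is
# assumed to have been obtained in advance (e.g. from the user's terminal).

def recognize_name(audio: bytes) -> str:
    # Stub: a real system would run ASR on the captured audio (S125).
    return "Zhang Min"

def activity_from_voice(audio: bytes, relationships: dict) -> dict:
    name = recognize_name(audio)                    # S125: speech -> name
    relation = relationships.get(name, "stranger")  # S126: name -> relation
    return {"interacting_with": name,               # S127: activity info
            "relationship": relation}

info = activity_from_voice(b"...", {"Zhang Min": "boss"})
print(info)  # {'interacting_with': 'Zhang Min', 'relationship': 'boss'}
```

The resulting activity information could then feed a preset lookup such as the backrest-angle mapping sketched earlier.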
Based on the same inventive concept, the present disclosure also provides yet another exemplary method of intelligent wheelchair adjustment.
FIG. 4 illustrates a flow chart of yet another exemplary method of intelligent wheelchair adjustment. The interpersonal relationship information further includes: the person image corresponding to the name. Referring to fig. 4, obtaining the current activity information of the user through the sound and/or image collected in real time in step S12 may further include the following steps.
In step S128, an image is acquired in real time.
The images of the activity conditions around the user are collected in real time, so that the current activity information of the user can be conveniently known.
In step S129, the image is compared with the personal image by image recognition.
By comparing the image collected in real time with the person image obtained in advance, it can be quickly confirmed whether the target object facing the user is a person familiar to the user, so that the intelligent wheelchair can be adjusted for the user's convenience.
In step S1210, the interpersonal relationship corresponding to the image is obtained based on the comparison.
And through comparison, the target object is confirmed to be a person familiar to the user, and the interpersonal relationship information between the target object and the user in the acquired image is obtained by combining the existing interpersonal relationship.
In step S1211, activity information is obtained according to the interpersonal relationship.
And determining that the user is interacting with the acquainted person through the acquired interpersonal relationship, thereby obtaining the activity information of the current user.
Through this embodiment, the intelligent wheelchair can use image recognition to quickly associate the target object with the interpersonal relationship information and actively adjust itself to match the user's needs.
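The comparison in step S129 can be illustrated with a nearest-neighbor match over face feature vectors. The two-dimensional embeddings and the distance threshold below are toy stand-ins for the output of a real face-recognition model.

```python
# Sketch of step S129/S1210: a captured face, reduced to a feature vector,
# is matched against person images stored with the relationship information.
# Embeddings and the threshold are illustrative assumptions.

def match_person(query_vec, known_people, threshold=0.3):
    """Return the name of the closest known person within the distance
    threshold, or None if the face matches nobody on record."""
    best_name, best_dist = None, threshold
    for name, vec in known_people.items():
        dist = sum((a - b) ** 2 for a, b in zip(query_vec, vec)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

known = {"Li Wei": [0.1, 0.9], "Zhang Min": [0.8, 0.2]}
print(match_person([0.12, 0.88], known))  # Li Wei
print(match_person([0.5, 0.5], known))    # None
```

A match would then yield the stored interpersonal relationship (S1210) and, from it, the activity information (S1211).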
In an embodiment, in step S12, obtaining the current activity information of the user through the real-time collected sound and/or image may further include: images collected in real time; and identifying the image through a neural network to obtain an activity scene corresponding to the image.
Images of the activity conditions around the user are collected in real time, making it convenient to learn the user's current activity information. The acquired image is recognized through a neural network, identifying the objects in it, so that the user's current activity scene can be judged and the activity information obtained conveniently. For example, the image is recognized by a target detection neural network model. From the recognition result, the objects present around the user can be obtained; for instance, when a picture of the area in front of the user is collected and recognized, the intelligent wheelchair can learn that there is a counter in front of the user. In one example, based on the objects identified in the image, the height of an object may be determined from its distance to the user using imaging principles. From the identified objects and their heights, the current activity scene of the user is obtained. For example, if the recognized object is a cabinet 1.5 meters tall, it can be judged that the user is in an activity scene surrounded by tall cabinets; if the recognized image contains books, it can be concluded that the user is in front of a bookcase.
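Turning detected objects into an activity scene, as the paragraph describes, can be sketched with simple rules over (label, height) pairs. The labels, heights, and rules are assumptions; in practice the detections would come from the target detection network.

```python
# Illustrative sketch: detections from an object-detection model (stubbed as
# (label, estimated_height_m) pairs) are mapped to an activity scene using
# rules paraphrasing the examples in the text. All values are made up.

def infer_scene(detections):
    """detections: list of (label, estimated_height_m) pairs."""
    labels = {label for label, _ in detections}
    if "book" in labels:
        return "in front of a bookcase"
    if any(label == "counter" and h > 1.0 for label, h in detections):
        return "facing a tall counter"
    return "unknown"

print(infer_scene([("counter", 1.5)]))               # facing a tall counter
print(infer_scene([("book", 0.3), ("shelf", 2.0)]))  # in front of a bookcase
```

The estimated counter height could also feed a seat-height adjustment directly, as in the fast-food example below.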
In step S13, adjusting the form of the intelligent wheelchair based on the user information and the activity information includes: adjusting the form of the intelligent wheelchair based on the user information and the activity scene.
The user's activity information is obtained from the identified activity scene. For example, if the identified scene is a bookstore and the user is holding a book, the obtained activity information is that the user is reading; if the identified scene is a restaurant with chopsticks and a table in front of the user, the obtained activity information is that the user is waiting to dine in the restaurant. The form of the intelligent wheelchair is then adjusted automatically according to the obtained activity information and the user's current height state, for example by adjusting the inclination of the backrest or the height of the wheelchair, so that the user can act conveniently in the current activity scene. For example, when the user is ordering at a fast-food restaurant, the intelligent wheelchair can automatically adjust its height according to the recognized height of the front counter, so that the user can purchase fast food at a suitable height.
In the embodiment, the acquired image is used for identifying the activity scene of the user, the current activity scene of the user is confirmed according to the identified objects around the user, and the intelligent wheelchair is automatically adjusted according to the activity scene of the user, so that the height, the inclination angle and other forms of the intelligent wheelchair conform to the current activity state of the user and are suitable for the user.
In another embodiment, in step S12, obtaining the current activity information of the user through the real-time collected sound and/or image, further includes: and acquiring images in real time, acquiring the geographic position of the user, and obtaining an activity scene based on the geographic position.
The intelligent wheelchair can judge the user's current surroundings from the images collected in real time, determine the user's current geographic position by recognizing the scenery in the images, and thereby obtain the user's current activity scene. In one embodiment, the intelligent wheelchair can obtain the user's current geographic position through positioning, determine the surrounding activity environment from the geographic position, and predict the change of form about to be required. Combined with the activity scene recognized from the collected images, an accurate form can be provided for the user quickly and conveniently.
In another embodiment, the intelligent wheelchair has a positioning function, the currently acquired coordinates are sent to the terminal, and the geographic position corresponding to the current coordinates is determined by using a positioning map of the terminal, so that the geographic position of the current user is determined.
In another embodiment, the intelligent wheelchair can acquire the current geographic position of the user through a terminal carried by the user, and the current position can be quickly acquired by utilizing the positioning function of the terminal.
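The positioning path can be sketched as mapping the wheelchair's current coordinates to the nearest known place. The coordinates, place names, and distance tolerance below are all made up; a real system would query the terminal's positioning map instead.

```python
# Minimal sketch of the geolocation embodiment: current coordinates are
# matched against a small table of known places to derive an activity scene.
# Coordinates, places, and the tolerance are illustrative assumptions.

PLACES = {
    (39.9042, 116.4074): "restaurant",
    (39.9100, 116.4000): "library",
}

def nearest_place(lat, lon, max_deg=0.01):
    """Return the closest known place within max_deg (Manhattan distance
    in degrees, a crude stand-in for real geodesic distance), else None."""
    best, best_d = None, max_deg
    for (plat, plon), name in PLACES.items():
        d = abs(plat - lat) + abs(plon - lon)
        if d < best_d:
            best, best_d = name, d
    return best

print(nearest_place(39.9045, 116.4070))  # restaurant
print(nearest_place(0.0, 0.0))           # None (far from everything)
```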
In an embodiment, in step S12, obtaining the current activity information of the user through the real-time collected sound and/or image may further include: and acquiring emotion information of the user through the sound and/or the image acquired in real time.
The user's emotion information is acquired so that the intelligent wheelchair can be used safely under different moods, reducing unnecessary safety risks.
In one embodiment, a real-time collected sound is obtained; recognizing the voice emotion corresponding to the voice through voice recognition; and acquiring emotion information of the user through the voice emotion.
The user's voice is collected through the voice acquisition module carried by the intelligent wheelchair, and speech recognition is performed on the collected voice using a speech recognition model to obtain the user's emotion information, such as happiness or anger.
In another embodiment, obtaining emotional information of the user further comprises: acquiring a real-time acquired image; based on the image, identifying the corresponding facial emotion in the image through a face identification model; and acquiring emotion information of the user through the facial emotion.
Facial images of the user are collected and recognized using a face recognition model to identify the user's facial emotion at that moment; according to the recognized facial emotion, it is determined whether the user's emotion information indicates happiness or anger.
In one embodiment, speech recognition and face recognition can be combined to process the collected user voice and facial images together, making the obtained emotion information more accurate so that the intelligent wheelchair can be adjusted effectively.
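One simple way to combine the two recognition results, offered here purely as an illustration, is to average per-label confidence scores from the voice and face models and take the highest-scoring label. The labels, scores, and averaging strategy are assumptions, not the disclosed method.

```python
# Hypothetical fusion of voice-based and face-based emotion estimates:
# average the confidence each modality assigns to each emotion label and
# return the label with the highest fused score. Scores are illustrative.

def fuse_emotions(voice_scores, face_scores):
    labels = set(voice_scores) | set(face_scores)
    fused = {l: (voice_scores.get(l, 0.0) + face_scores.get(l, 0.0)) / 2
             for l in labels}
    return max(fused, key=fused.get)

print(fuse_emotions({"happy": 0.7, "angry": 0.2},
                    {"happy": 0.4, "angry": 0.5}))  # happy
```

Weighted averages or a learned combiner would be natural refinements of the same idea.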
In yet another embodiment, obtaining emotional information of a user comprises: sending a request to the wearable device, receiving the heart rate state and/or the body temperature state of the user sent by the wearable device in response to the request, and acquiring emotion information of the user based on the heart rate state and/or the body temperature state.
The wearable device interacts with the intelligent wheelchair. From the user's heart rate state or body temperature state obtained by the wearable device, fluctuations in the user's current emotion can be judged; for example, a sudden increase in heart rate or body temperature can indicate that the user's emotion is strongly stimulated. Based on this interaction, the intelligent wheelchair can acquire the user's emotion information from the heart rate state and body temperature state reported by the wearable device. In an embodiment, the heart rate state and the body temperature state can be used together as conditions for acquiring the emotion information, so that the intelligent wheelchair can be adjusted effectively.
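The rule in the paragraph above can be sketched as a threshold check. The specific thresholds (a 25 bpm jump over resting heart rate, a 37.5 °C temperature limit) are illustrative assumptions, not values given in the disclosure.

```python
# Hypothetical threshold rule: a sudden rise in heart rate or an elevated
# body temperature signals strong emotional arousal. Thresholds are made up.

def emotion_state(resting_hr, current_hr, body_temp_c,
                  hr_jump=25, temp_limit=37.5):
    """Classify the user's state from wearable readings."""
    if current_hr - resting_hr > hr_jump or body_temp_c > temp_limit:
        return "strongly stimulated"
    return "calm"

print(emotion_state(70, 105, 36.8))  # strongly stimulated (heart rate jump)
print(emotion_state(70, 78, 36.6))   # calm
```

Using both signals together, as the embodiment suggests, makes the classification more robust than either reading alone.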
In one embodiment, the emotion information can be combined in any manner with the acquired interpersonal relationship information and activity information to jointly judge the user's current state, so that the intelligent wheelchair can adjust its form accurately and suit the user.
In step S13, adjusting the form of the intelligent wheelchair based on the user information and the activity information includes: adjusting the form of the intelligent wheelchair based on the user information and the emotion information.
The form of the intelligent wheelchair is adjusted automatically according to the obtained emotion information of the user, so that the seat height, the backrest inclination, and other aspects of its form suit the user and driving safety is improved.
Through the above embodiments, the intelligent wheelchair changes its form according to the acquired emotion information of the user, enabling the user to use it safely and improving the user experience.
Fig. 5 is a block diagram of an apparatus 100 for intelligent wheelchair adjustment, according to an exemplary embodiment. Referring to fig. 5, the apparatus may include: an obtaining module 110 and an adjusting module 120.
The obtaining module 110 is configured to obtain user information of a user, and obtain current activity information of the user through sound and/or images collected in real time.
And an adjusting module 120 for adjusting the shape of the intelligent wheelchair based on the user information and the activity information.
In one embodiment, the user information includes: height information of the user. The obtaining module 110 is further configured to: acquire the sound and/or the image in real time; identify the human body posture of a target object in communication with the user based on the sound and/or the image; and obtain the activity information according to the human body posture. The adjusting module 120 is configured to adjust the form of the intelligent wheelchair based on the human body posture and the height information.
In an embodiment, the obtaining module 110 is further configured to obtain interpersonal relationship information of the user, where the interpersonal relationship information includes: name, and interpersonal relationship between the name and the user. And the adjusting module 120 is used for adjusting the shape of the intelligent wheelchair based on the interpersonal relationship information and the activity information.
In an embodiment, the obtaining module 110 is further configured to collect a voice of the user; carrying out voice recognition on the voice to obtain a name; obtaining an interpersonal relationship corresponding to the name based on the name; and obtaining activity information according to the interpersonal relationship.
In another embodiment, the human relationship information further includes: a person image corresponding to the name; the acquisition module 110 is further configured to acquire an image in real time; comparing the image with the figure image through image recognition; obtaining a interpersonal relationship corresponding to the image based on the comparison; and obtaining activity information according to the interpersonal relationship.
In one embodiment, the activity information includes: and (4) an active scene. The obtaining module 110 is further configured to collect images in real time; and identifying the image through a neural network to obtain an activity scene corresponding to the image. An adjustment module to: and adjusting the form of the intelligent wheelchair based on the user information and the activity scene.
In another embodiment, the obtaining module 110 is further configured to collect the image in real time, obtain the geographic location of the user, and obtain the activity scene based on the geographic location.
In an embodiment, the obtaining module 110 is further configured to obtain emotion information of the user by collecting sound and/or images in real time. And the adjusting module 120 is used for adjusting the shape of the intelligent wheelchair based on the user information and the emotion information.
In another embodiment, the obtaining module 110 is further configured to send a request to the wearable device and receive a heart rate state and/or a body temperature state of the user sent by the wearable device in response to the request, and obtain emotional information of the user based on the heart rate state and/or the body temperature state.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 6 is a block diagram illustrating an apparatus 200 for intelligent wheelchair adjustment according to an exemplary embodiment. For example, the apparatus 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, the apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the device 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 202 may include one or more processors 220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 can include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, audio component 210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 214 includes one or more sensors for providing various aspects of status assessment for the apparatus 200. For example, the sensor component 214 may detect the open/closed state of the apparatus 200 and the relative positioning of components such as its display and keypad; it may also detect a change in position of the apparatus 200 or of one of its components, the presence or absence of user contact with the apparatus 200, the orientation or acceleration/deceleration of the apparatus 200, and a change in its temperature. The sensor component 214 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the apparatus 200 and other devices. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 204, comprising instructions executable by processor 220 of device 200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, wherein instructions of the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the above-described method of intelligent wheelchair tuning.
Fig. 7 is a block diagram illustrating an apparatus 300 for intelligent wheelchair adjustment, according to an exemplary embodiment. For example, the apparatus 300 may be provided as a server. Referring to FIG. 7, apparatus 300 includes a processing component 322 that further includes one or more processors and memory resources, represented by memory 332, for storing instructions, such as applications, that are executable by processing component 322. The application programs stored in memory 332 may include one or more modules that each correspond to a set of instructions. Further, the processing component 322 is configured to execute instructions to perform the above-described methods.
The apparatus 300 may also include a power component 326 configured to perform power management of the apparatus 300, a wired or wireless network interface 350 configured to connect the apparatus 300 to a network, and an input/output (I/O) interface 358. The apparatus 300 may operate based on an operating system stored in the memory 332, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise forms described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (20)
1. A method of intelligent wheelchair adjustment, comprising:
acquiring user information of a user, wherein the user information is used for automatically matching a form suitable for the user by the intelligent wheelchair, and the user information comprises: the height information of the user or the interpersonal relationship information of the user;
obtaining current activity information of the user through sound and/or images acquired in real time;
and adjusting the shape of the intelligent wheelchair based on the user information and the activity information.
2. The method of intelligent wheelchair adjustment of claim 1, wherein the user information comprises: height information of the user;
the obtaining of the current activity information of the user through the sound and/or the image collected in real time comprises:
acquiring the sound and/or the image in real time;
identifying a human body posture of a target object in communication with the user based on the sound and/or the image;
obtaining the activity information according to the human body posture;
the adjusting the shape of the intelligent wheelchair based on the user information and the activity information comprises:
and adjusting the shape of the intelligent wheelchair based on the human body posture and the height information.
3. The method of intelligent wheelchair adjustment of claim 1,
the acquiring user information of the user further includes: acquiring interpersonal relationship information of the user, wherein the interpersonal relationship information comprises: a name, and an interpersonal relationship between the name and the user;
the adjusting the shape of the intelligent wheelchair based on the user information and the activity information comprises:
and adjusting the form of the intelligent wheelchair based on the interpersonal relationship information and the activity information.
4. The method for adjusting an intelligent wheelchair as claimed in claim 3, wherein the obtaining the current activity information of the user through the sound and/or image collected in real time comprises:
collecting the voice of the user;
carrying out voice recognition on the voice to obtain the name;
obtaining the interpersonal relationship corresponding to the name based on the name;
and obtaining the activity information according to the interpersonal relationship.
5. The method of intelligent wheelchair adjustment of claim 3, wherein the interpersonal relationship information further comprises: a person image corresponding to the name;
the obtaining of the current activity information of the user through the sound and/or the image collected in real time comprises:
acquiring the image in real time;
comparing the image with the person image through image recognition;
obtaining the interpersonal relationship corresponding to the image based on the comparison;
and obtaining the activity information according to the interpersonal relationship.
6. The method of intelligent wheelchair adjustment of claim 1, wherein the activity information comprises: an active scene;
the obtaining of the current activity information of the user through the sound and/or the image collected in real time comprises:
acquiring the image in real time;
identifying the image through a neural network to obtain an activity scene corresponding to the image;
the adjusting the shape of the intelligent wheelchair based on the user information and the activity information comprises:
and adjusting the shape of the intelligent wheelchair based on the user information and the activity scene.
7. The method for intelligent wheelchair tuning of claim 6 wherein the obtaining current activity information of the user via real-time captured sounds and/or images further comprises:
and acquiring the image in real time, acquiring the geographic position of the user, and obtaining the activity scene based on the geographic position.
8. The method for adjusting an intelligent wheelchair as claimed in claim 1, wherein the obtaining the current activity information of the user through the sound and/or image collected in real time comprises:
acquiring emotion information of the user by acquiring the sound and/or the image in real time;
the adjusting the shape of the intelligent wheelchair based on the user information and the activity information comprises:
and adjusting the shape of the intelligent wheelchair based on the user information and the emotion information.
9. The method of intelligent wheelchair adjustment of claim 8, wherein the obtaining emotional information of the user comprises:
sending a request to a wearable device, receiving the heart rate state and/or the body temperature state of the user sent by the wearable device in response to the request, and acquiring emotion information of the user based on the heart rate state and/or the body temperature state.
10. An apparatus for intelligent wheelchair adjustment, comprising:
the obtaining module is used for obtaining user information of a user and obtaining current activity information of the user through sound and/or images collected in real time, wherein the user information is used by the intelligent wheelchair to automatically match a form suitable for the user, and the user information comprises: height information of the user or interpersonal relationship information of the user;
and the adjusting module is used for adjusting the shape of the intelligent wheelchair based on the user information and the activity information.
11. The apparatus for intelligent wheelchair adjustment of claim 10, wherein the user information comprises: height information of the user;
the obtaining module is further configured to:
acquire the sound and/or the image in real time;
identify a human body posture of a target object in communication with the user based on the sound and/or the image; and
obtain the activity information according to the human body posture;
the adjusting module is further configured to: adjust the shape of the intelligent wheelchair based on the human body posture and the height information.
12. The device for intelligent wheelchair adjustment of claim 10, wherein
the obtaining module is further configured to: acquire interpersonal relationship information of the user, wherein the interpersonal relationship information comprises: a name, and an interpersonal relationship between the name and the user;
the adjusting module is further configured to: adjust the form of the intelligent wheelchair based on the interpersonal relationship information and the activity information.
13. The device for intelligent wheelchair adjustment of claim 12, wherein the obtaining module is further configured to:
collect the voice of the user;
perform voice recognition on the voice to obtain the name;
obtain the interpersonal relationship corresponding to the name based on the name; and
obtain the activity information according to the interpersonal relationship.
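The claim-13 pipeline (voice recognition yields a name, a stored table yields the relationship, and the relationship serves as activity information) could be sketched as follows. The contact table is invented, and the `transcribe` callable is a stand-in for a real speech-recognition engine.

```python
# Invented contact table mapping recognized names to relationships.
RELATIONSHIPS = {"alice": "daughter", "bob": "doctor"}

def activity_from_speech(audio, transcribe):
    """Run speech recognition on the collected voice to get a name,
    then look up the interpersonal relationship for that name."""
    name = transcribe(audio).strip().lower()
    relationship = RELATIONSHIPS.get(name, "stranger")
    return {"name": name, "relationship": relationship}
```

In use, `transcribe` would wrap whatever speech-recognition service the wheelchair integrates; unknown names fall back to a neutral "stranger" relationship.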
14. The apparatus for intelligent wheelchair adjustment of claim 12, wherein the interpersonal relationship information further comprises: a person image corresponding to the name;
the obtaining module is further configured to:
acquire the image in real time;
compare the image with the person image through image recognition;
obtain the interpersonal relationship corresponding to the image based on the comparison; and
obtain the activity information according to the interpersonal relationship.
15. The apparatus for intelligent wheelchair adjustment of claim 10, wherein the activity information comprises: an activity scene;
the obtaining module is further configured to:
acquire the image in real time; and
identify the image through a neural network to obtain an activity scene corresponding to the image;
the adjusting module is further configured to:
adjust the shape of the intelligent wheelchair based on the user information and the activity scene.
16. The device for intelligent wheelchair adjustment of claim 15, wherein the obtaining module is further configured to:
acquire the image in real time, acquire the geographic position of the user, and obtain the activity scene based on the geographic position.
17. The device for intelligent wheelchair adjustment of claim 10, wherein the obtaining module is further configured to:
acquire emotion information of the user by collecting the sound and/or the image in real time;
the adjusting module is further configured to:
adjust the shape of the intelligent wheelchair based on the user information and the emotion information.
18. The device for intelligent wheelchair adjustment of claim 17, wherein the obtaining module is further configured to:
send a request to a wearable device, receive the heart rate state and/or the body temperature state of the user sent by the wearable device in response to the request, and acquire emotion information of the user based on the heart rate state and/or the body temperature state.
19. An electronic device, wherein the electronic device comprises:
a memory to store instructions; and
a processor for invoking the instructions stored in the memory to perform the method of intelligent wheelchair adjustment of any one of claims 1-9.
20. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to perform the method of intelligent wheelchair adjustment of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910979882.3A CN110693654B (en) | 2019-10-15 | 2019-10-15 | Method and device for adjusting intelligent wheelchair and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110693654A CN110693654A (en) | 2020-01-17 |
CN110693654B true CN110693654B (en) | 2021-11-09 |
Family
ID=69199868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910979882.3A Active CN110693654B (en) | 2019-10-15 | 2019-10-15 | Method and device for adjusting intelligent wheelchair and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110693654B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113268014B (en) * | 2020-02-14 | 2024-07-09 | 阿里巴巴集团控股有限公司 | Carrier, facility control method, device, system and storage medium |
CN111324074A (en) * | 2020-03-22 | 2020-06-23 | 上海宏勃生物科技发展有限公司 | Intelligent altar wheel control system |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CH713464B1 (en) * | 2007-08-24 | 2018-08-15 | Levo Ag Wohlen | Vehicle with center-wheel drive, in particular wheelchair or upright wheelchair. |
DK177506B1 (en) * | 2012-05-08 | 2013-08-12 | Wolturnus As | Multi-adjustable wheelchair with closed frame and impact-resistant front suspension |
US20150209207A1 (en) * | 2014-01-30 | 2015-07-30 | University Of Pittsburgh - Of The Commonwealth System Of Higher Education | Seating function monitoring and coaching system |
CN204890386U (en) * | 2015-04-08 | 2015-12-23 | 广州博斯特智能科技有限公司 | Modularization intelligence wheelchair |
CN106550160B (en) * | 2015-09-17 | 2020-06-09 | 中国电信股份有限公司 | Motor vehicle configuration method and system |
CN107296693A (en) * | 2017-05-05 | 2017-10-27 | 燕山大学 | A kind of multi-control modes electric wheelchair |
CN109660969A (en) * | 2017-10-10 | 2019-04-19 | 张尉恒 | Vehicle user information collects application method and corresponding equipment and system |
CN107640252A (en) * | 2017-11-01 | 2018-01-30 | 程炽坤 | A kind of highly automated regulating system of shared bicycle seat and its method |
CN109875777B (en) * | 2019-02-19 | 2021-08-31 | 西安科技大学 | Fetching control method of wheelchair with fetching function |
CN110025437A (en) * | 2019-03-28 | 2019-07-19 | 房歆哲 | A kind of intelligent wheel chair |
CN110012104A (en) * | 2019-04-12 | 2019-07-12 | 深圳市班玛智行科技有限公司 | A kind of social intercourse system and method based on artificial intelligence and technology of Internet of things |
CN110013261B (en) * | 2019-05-24 | 2022-03-08 | 京东方科技集团股份有限公司 | Emotion monitoring method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109446876B (en) | Sign language information processing method and device, electronic equipment and readable storage medium | |
CN107347135B (en) | Photographing processing method and device and terminal equipment | |
US10110395B2 (en) | Control method and control device for smart home device | |
CN105450736B (en) | Method and device for connecting with virtual reality | |
US9848796B2 (en) | Method and apparatus for controlling media play device | |
CN106375782B (en) | Video playing method and device | |
US11057728B2 (en) | Information processing apparatus, information processing method, and program | |
US20160330548A1 (en) | Method and device of optimizing sound signal | |
WO2016173243A1 (en) | Method and apparatus for information broadcast | |
CN110693654B (en) | Method and device for adjusting intelligent wheelchair and electronic equipment | |
US20210029304A1 (en) | Methods for generating video, electronic device and storage medium | |
CN109799968A (en) | Adjusting method, wearable device and the computer readable storage medium of device voice volume | |
CN106126082B (en) | Terminal control method and device and terminal | |
CN109257498B (en) | Sound processing method and mobile terminal | |
CN107666536B (en) | Method and device for searching terminal | |
US9361316B2 (en) | Information processing apparatus and phrase output method for determining phrases based on an image | |
CN109819167B (en) | Image processing method and device and mobile terminal | |
CN111741394A (en) | Data processing method and device and readable medium | |
CN111698600A (en) | Processing execution method and device and readable medium | |
US11544968B2 (en) | Information processing system, information processingmethod, and recording medium | |
CN111739528A (en) | Interaction method and device and earphone | |
US20210174823A1 (en) | System for and Method of Converting Spoken Words and Audio Cues into Spatially Accurate Caption Text for Augmented Reality Glasses | |
KR20160016216A (en) | System and method for real-time forward-looking by wearable glass device | |
US11270682B2 (en) | Information processing device and information processing method for presentation of word-of-mouth information | |
CN112187996A (en) | Information adjusting method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||