CN109808711B - Automatic driving vehicle control method and system, automatic driving vehicle and visual prosthesis - Google Patents


Info

Publication number
CN109808711B
Authority
CN
China
Prior art keywords
eyeball
vehicle
prosthesis
information
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811598562.5A
Other languages
Chinese (zh)
Other versions
CN109808711A (en)
Inventor
韩嘉宁
谢非
夏邵君
卢华昊
韩雨桐
Current Assignee
Nanjing Huamai Robot Technology Co ltd
Original Assignee
Nanjing Normal University
Priority date
Filing date
Publication date
Application filed by Nanjing Normal University
Priority to CN201811598562.5A
Publication of CN109808711A
Application granted
Publication of CN109808711B

Landscapes

  • Steering Control In Accordance With Driving Conditions (AREA)

Abstract

The invention provides a method and a system for controlling an autonomous vehicle based on a visual prosthesis, an autonomous vehicle, and a visual prosthesis. The method comprises the following steps: receiving eyeball control information sent by a visual prosthesis, wherein the eyeball control information is generated by: detecting an eyeball movement signal of a user through the visual prosthesis, extracting eyeball track information from the eyeball movement signal through the visual prosthesis, and matching the eyeball track information against a preset motion track; when the eyeball track information matches the preset motion track, taking the eyeball track information as eyeball control information; and converting the eyeball control information into a corresponding control instruction and executing the corresponding operation based on the control instruction. The invention enables a blind user to control an autonomous vehicle through a visual prosthesis, improves the intelligence of both the autonomous vehicle and the visual prosthesis, and is particularly suitable for intelligent driving by blind users in specific scenarios.

Description

Automatic driving vehicle control method and system, automatic driving vehicle and visual prosthesis
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method and a system for controlling an automatic driving vehicle based on a visual prosthesis, the automatic driving vehicle and the visual prosthesis.
Background
Autonomous driving technology is a bridge that meets blind people's desire to drive a vehicle: it supports independent travel by the blind and improves their ability to live independently.
However, during autonomous driving the driver may be required to intervene in an emergency, and existing autonomous driving control systems cannot support intervention by blind drivers, which limits their intelligence.
Disclosure of Invention
The invention mainly aims to provide an automatic driving vehicle control method based on a visual prosthesis, and aims to solve the technical problems that the existing automatic driving control system cannot support the intervention driving of blind people and is low in intelligence.
In order to achieve the above object, the present invention provides an autonomous vehicle control method based on a visual prosthesis, comprising the steps of:
receiving eyeball control information sent by a visual prosthesis, wherein the generation step of the eyeball control information comprises the following steps: detecting an eyeball movement signal of a user through a visual prosthesis, extracting eyeball track information based on the eyeball movement signal of the user through the visual prosthesis, and matching and judging the eyeball track information with a preset movement track; when the eyeball track information is matched with a preset motion track, taking the eyeball track information as eyeball control information;
and converting the eyeball control information into a corresponding control instruction, and executing corresponding operation based on the control instruction.
Optionally, the step of detecting the eye movement signal of the user by the visual prosthesis is preceded by:
determining the current driving mode level of the vehicle;
the step of converting the eyeball control information into a corresponding control instruction and executing a corresponding operation based on the control instruction comprises:
when the vehicle is in a first-level automatic driving mode, generating a corresponding target visual field instruction according to a preset visual field instruction generation rule and the eyeball control information;
and adjusting the shooting angle of the visual field camera based on the target visual field instruction.
Optionally, the step of generating a corresponding target visual field instruction according to a preset visual field instruction generation rule and the eyeball control information, and adjusting the shooting angle of the visual field camera based on the target visual field instruction, comprises the following steps:
receiving head action information sent by a visual prosthesis;
calculating a target change track of a shooting angle of a visual field camera according to the head action information and the eyeball control information;
and adjusting the shooting angle of the visual field camera according to the target change track.
Optionally, the step of calculating a target change trajectory of a shooting angle of a visual field camera according to the head motion information and the eyeball control information includes:
analyzing the head action information to obtain the head movement direction and the head movement amplitude;
analyzing the eyeball control information to obtain the eyeball movement direction and the eyeball movement amplitude;
and determining the change direction and the change amplitude of the user's visual field focus based on the head movement direction and amplitude together with the eyeball movement direction and amplitude, and taking the change direction and change amplitude of the visual field focus as the target change track of the shooting angle of the visual field camera.
Optionally, the step of converting the eyeball control information into a corresponding control instruction and executing a corresponding operation based on the control instruction includes:
and inquiring a preset driving instruction list when the vehicle is in a secondary automatic driving mode, acquiring a target driving instruction corresponding to the eyeball control information, and controlling the vehicle to execute corresponding driving operation based on the target driving instruction.
Optionally, after the step of obtaining the target driving instruction corresponding to the eyeball control information, the method further comprises:
calculating a predicted driving area of the vehicle according to the target driving instruction and the current position of the vehicle;
acquiring a road condition image on the predicted driving area, performing object recognition on the road condition image, and judging whether an obstacle exists on the predicted driving area;
and if an obstacle exists in the predicted driving area, outputting an abnormal warning.
Optionally, the visual prosthesis-based autonomous vehicle control method further comprises:
when the vehicle is in a secondary automatic driving mode and a preset number of feasible roads is detected within a preset distance in the driving direction of the vehicle, outputting a direction inquiry prompt, wherein the preset number is greater than or equal to two;
acquiring the current latest eyeball control information and determining a target road according to the latest eyeball control information;
and controlling the vehicle to drive towards the target road.
In order to achieve the above object, the present invention further provides an autonomous vehicle, including a vehicle control center, a vision camera, a memory, and an autonomous vehicle control program based on a visual prosthesis, stored in the memory and executable by the vehicle control center, wherein the autonomous vehicle control program based on the visual prosthesis implements the steps of the method for controlling an autonomous vehicle based on a visual prosthesis when executed by the vehicle control center.
To achieve the above object, the present invention also provides a visual prosthesis comprising an in vivo device and an in vitro device connected wirelessly; the vision prosthesis also stores a vision prosthesis based autonomous vehicle control program thereon, which when executed by the vision prosthesis implements the steps of:
detecting an eyeball movement signal of a user, extracting eyeball track information by a visual prosthesis based on the eyeball movement signal of the user, and matching and judging the eyeball track information with a preset movement track; and when the eyeball track information is matched with a preset motion track, taking the eyeball track information as eyeball control information, and sending the eyeball control information to the automatic driving vehicle.
In addition, in order to achieve the above object, the present invention also provides a vision prosthesis-based autonomous vehicle control system, which includes the autonomous vehicle as described above and a vision prosthesis wired or wirelessly connected to the autonomous vehicle; the visual prosthesis is used for detecting an eyeball movement signal of a user, extracting eyeball track information based on the eyeball movement signal of the user by the visual prosthesis, and matching and judging the eyeball track information with a preset movement track; when the eyeball track information is matched with a preset motion track, the eyeball track information is used as eyeball control information, and the eyeball control information is sent to a vehicle; the automatic driving vehicle is used for receiving eyeball control information sent by a visual prosthesis, converting the eyeball control information into a corresponding control instruction and executing corresponding operation based on the control instruction.
According to the embodiment of the invention, the eyeball control information sent by the visual prosthesis corresponds to the control instruction through information interaction between the visual prosthesis and the vehicle, and the vehicle is controlled based on the control instruction, so that a user can realize control over the vehicle through the visual prosthesis, timely discovery and timely control over abnormal conditions of the automatically-driven vehicle are realized, the intelligence of the visual prosthesis and the vehicle is improved, the user friendliness of vehicle control is also improved, and the method is particularly suitable for intelligent driving of blind people in specific scenes.
Drawings
FIG. 1 is a schematic diagram of an autonomous vehicle configuration for a hardware operating environment in accordance with an embodiment of the present invention;
FIG. 2 is a schematic view of one embodiment of the visual prosthesis of the present invention;
FIG. 3 is a schematic view of one embodiment of the vision prosthesis based autonomous vehicle control system of the present invention;
FIG. 4 is a schematic view of another embodiment of the vision prosthesis based autonomous vehicle control system of the present invention;
FIG. 5 is a flowchart illustrating an embodiment of a method for controlling an autonomous vehicle based on a visual prosthesis according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an autonomous vehicle according to the present invention.
The autonomous vehicle 10 may include: a vehicle control center 101, a field of view camera 102, and a memory 103. In the autonomous vehicle 10, the vehicle control center 101 is connected to the memory 103, the memory 103 stores an autonomous vehicle control program based on the visual prosthesis, and the vehicle control center 101 may call the autonomous vehicle control program based on the visual prosthesis stored in the memory 103 and implement the following steps of the respective embodiments of the autonomous vehicle control method based on the visual prosthesis. It should be noted that "vehicle" in the claims of the present invention and in each of the embodiments described below is an abbreviation of an autonomous vehicle.
The memory 103 may be used to store software programs and various data. The memory 103 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as an autopilot control program based on a visual prosthesis), and the like; the storage data area may include a database or the like. The vehicle control center 101, which is a control center of the autonomous vehicle 10, connects various parts of the entire autonomous vehicle 10 by using various interfaces and lines, and performs various functions of the autonomous vehicle 10 and processes data by operating or executing software programs and/or modules stored in the memory 103 and calling data stored in the memory 103, thereby performing overall control of the autonomous vehicle 10.
The visual field camera 102 is used for acquiring images outside the automatic driving vehicle and sending the acquired images to the automatic driving vehicle; and the control instruction of the automatic driving vehicle can be received, and the shooting is adjusted according to the control instruction. After the autonomous vehicle is connected to the visual prosthesis, the autonomous vehicle transmits the image captured by the vision camera 102 to the visual prosthesis so that the blind user can see the corresponding image. The visual field camera 102 is rotatably connected with the vehicle base through a rotatable connecting piece, and the shooting angle of the visual field camera 102 can be converted through rotating the rotatable connecting piece between the visual field camera 102 and the base.
Further, the invention also provides a visual prosthesis, which comprises an in-vivo device and an in-vitro device which are connected wirelessly; the vision prosthesis also stores a vision prosthesis based autonomous vehicle control program, which when executed by the vision prosthesis implements the steps of:
detecting an eyeball movement signal of a user, extracting eyeball track information by a visual prosthesis based on the eyeball movement signal of the user, and matching and judging the eyeball track information with a preset movement track; and when the eyeball track information is matched with a preset motion track, taking the eyeball track information as eyeball control information, and sending the eyeball control information to the automatic driving vehicle.
Fig. 2 is a schematic view of an embodiment of the visual prosthesis of the present invention.
In the visual prosthesis 20 shown in fig. 2, an extracorporeal device 21 and an intracorporeal device 22 are included;
the extracorporeal device 21 comprises an image acquisition module 211, an image processing module 212, a data encoding module 213 and a first communication module 214 which are connected in sequence; the in-vivo device 22 comprises a second communication module 221, a filter and amplifier 222, a data decoding module 223, a microcontroller 224 and a microelectrode driving module 225 which are connected in sequence; the extracorporeal device 21 and the intracorporeal device 22 perform data communication through the first communication module 214 and the second communication module 221, and the first communication module 214 and the second communication module 221 may be radio frequency modules and perform connection communication through radio frequency signals.
The image acquisition module 211 may be a CCD camera that captures video images and sends image signals to the image processing module 212. The image processing module 212 converts the image signals into electrical stimulation information for the visual prosthesis microelectrode array, and the data encoding module 213 encodes the electrical stimulation information and sends it through the first communication module 214 to the second communication module 221 in the in-vivo device 22. The second communication module 221 passes the received electrical stimulation information to the filter and amplifier 222 for amplification and filtering, the filtered electrical stimulation information is sent to the data decoding module 223 for decoding, and the microcontroller 224 and the microelectrode driving module 225 stimulate the optic nerve according to the decoded electrical stimulation information to produce artificial visual perception.
Optionally, a sensor is installed in the visual prosthesis to sense the user's eye movement and collect eye movement signals. The sensor may be an electrode sensor, and the number of electrode sensors may be four or more; the electrodes detect the micro electric pulses emitted by the muscles controlling eye movement to collect the eye movement signals, which are then amplified, filtered, and denoised by the image processing module 212 or the filter and amplifier 222 of the visual prosthesis. Optionally, a sensor for sensing the user's head movement, such as an acceleration sensor and a gyroscope, is also installed in the visual prosthesis.
Further, the invention also provides an automatic driving vehicle control system based on the visual prosthesis, wherein the automatic driving vehicle control system comprises the automatic driving vehicle and the visual prosthesis which is in wired or wireless connection with the automatic driving vehicle;
the visual prosthesis is used for detecting an eyeball movement signal of a user, extracting eyeball track information based on the eyeball movement signal of the user by the visual prosthesis, and matching and judging the eyeball track information with a preset movement track; when the eyeball track information is matched with a preset motion track, the eyeball track information is used as eyeball control information, and the eyeball control information is sent to a vehicle;
the automatic driving vehicle is used for receiving eyeball control information sent by a visual prosthesis, converting the eyeball control information into a corresponding control instruction and executing corresponding operation based on the control instruction.
Fig. 3 is a schematic diagram of an embodiment of the vision prosthesis based autonomous vehicle control system according to the present invention.
In this embodiment, the autonomous vehicle 10 is connected to the extracorporeal device 21 of the visual prosthesis 20. Specifically, the autonomous vehicle 10 may be connected by wire to the image processing module 212 of the extracorporeal device 21, or connected wirelessly to the first communication module 214 of the extracorporeal device 21 and thereby indirectly to the image processing module 212. In both connection modes, the autonomous vehicle 10 sends the image acquired by the visual field camera 102 to the extracorporeal device, which performs image processing, stimulation parameter calculation, encoding, and other processing on the image and transmits the result to the in-vivo device 22.
Fig. 4 is a schematic diagram of another embodiment of the vision prosthesis based autonomous vehicle control system according to the present invention.
In this embodiment, the autonomous vehicle 10 is connected to the in-vivo device 22 of the visual prosthesis 20, and may be wirelessly connected to the second communication module 221. In this connection mode, the autonomous vehicle 10 itself performs image processing, stimulation parameter calculation, encoding, and the like on the image captured by the visual field camera 102, and transmits the result directly to the in-vivo device 22.
Those skilled in the art will appreciate that the autonomous vehicle configuration shown in fig. 1 does not constitute a limitation of the autonomous vehicle, the visual prosthesis configuration shown in fig. 2 does not constitute a limitation of the visual prosthesis, and the visual prosthesis-based autonomous vehicle control system shown in fig. 3 or fig. 4 does not constitute a limitation of such a control system; each may include more or fewer components/modules than shown, or a different arrangement of components/modules.
Based on the above related devices and systems, the following embodiments of the method for controlling an autonomous vehicle based on a visual prosthesis according to the present invention are proposed, wherein "user" refers to a user of the visual prosthesis.
The invention provides an automatic driving vehicle control method based on a visual prosthesis.
Referring to fig. 5, fig. 5 is a schematic flow chart of a first embodiment of the vision prosthesis-based autonomous vehicle control method according to the present invention. In this embodiment, the automatic driving vehicle control method based on the visual prosthesis of the present invention includes the following steps:
step S10, receiving eyeball control information sent by a visual prosthesis, wherein the generation step of the eyeball control information includes: detecting an eyeball movement signal of a user through a visual prosthesis, extracting eyeball track information based on the eyeball movement signal of the user through the visual prosthesis, and matching and judging the eyeball track information with a preset movement track; when the eyeball track information is matched with a preset motion track, taking the eyeball track information as eyeball control information;
the step S10 of the present embodiment is executed before the step S includes a step of connecting the visual prosthesis to the vehicle, and the visual prosthesis may be connected to the vehicle by wire or wirelessly; the vehicle control preparation operation is performed when the visual prosthesis and the vehicle detect the interconnection. The vehicle control preparation operation specifically includes:
(1) Visual prosthesis: after the visual prosthesis detects that it is connected to a vehicle, it can directly start the sensor module to sense the user's eyeball movement and collect eyeball movement signals; the sensor module may be an electrode sensor that collects the signals by detecting the micro electric pulses emitted by the muscles controlling eyeball movement. Alternatively, after detecting the connection with the vehicle, the prosthesis checks whether an eyeball sensing instruction has been received and starts the sensor module only after detecting that instruction; the eyeball sensing instruction may be triggered by a key input operation or a voice input operation of the user.
(2) Vehicle: after the vehicle detects that it is connected to the visual prosthesis, it can acquire the image captured by the visual field camera and transmit the image to the visual prosthesis, so that the user can see the road conditions outside the vehicle through the visual prosthesis. There may be one or more vision cameras for collecting images outside the vehicle, and a vision camera can be mounted at any position of the vehicle from which the external scene can be captured, such as above the vehicle or at its front, rear, left, or right. After the visual field camera collects an external image, the camera's analog signal is converted to a digital signal by an analog-to-digital conversion chip, encoded into an image code stream by a video encoding chip, and the image code stream is transmitted to the vehicle control center.
When there are multiple visual field cameras, a main camera is preset among them; the main camera serves as the current lens of the visual prosthesis, and the image captured by the main camera is sent to the visual prosthesis. Optionally, according to an input operation by the user on the visual prosthesis or on the vehicle, the lens may be switched to another vision camera, that is, the image captured by that camera is sent to the visual prosthesis instead. For example, if the main camera is preset as the front camera of the vehicle, the image initially transmitted to the visual prosthesis is the front camera's image; when a camera switching instruction triggered by a user operation is detected, the target camera the user wishes to switch to is obtained from the instruction, the current lens of the visual prosthesis is switched to the target camera, and the image captured by the target camera is transmitted to the visual prosthesis.
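The camera-switching behavior described above can be sketched as a small state holder. The `VisionCameraHub` class, its methods, and the camera names are illustrative assumptions, not part of the patent:

```python
class VisionCameraHub:
    """Tracks which vision camera currently feeds the visual prosthesis."""

    def __init__(self, cameras, main="front"):
        self.cameras = set(cameras)
        self.current = main  # the preset main camera is the initial lens

    def switch(self, target):
        """Handle a camera switching instruction: make `target` the current
        lens whose images are sent to the prosthesis."""
        if target not in self.cameras:
            raise ValueError(f"unknown camera: {target}")
        self.current = target
        return self.current


hub = VisionCameraHub({"front", "rear", "left", "right"})
hub.switch("left")  # user-triggered switch away from the main camera
```

A real vehicle would additionally reroute the video stream when `current` changes; the sketch only models the lens-selection state.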
In one embodiment, the vehicle directly transmits an image (hereinafter referred to as a "first image") obtained from the visual field camera to an extracorporeal device of the visual prosthesis, the extracorporeal device performs image preprocessing on the first image, calculates and generates a corresponding stimulation parameter, codes the stimulation parameter and then sends the stimulation parameter to the intracorporeal device so as to stimulate the user to generate the vision corresponding to the first image; in another embodiment, the vehicle performs image preprocessing on a first image obtained from a visual field camera, calculates and generates a corresponding stimulation parameter, and transmits the stimulation parameter to an in-vivo device through an in-vitro device or directly wirelessly transmits the stimulation parameter to the in-vivo device so as to stimulate a user to generate vision corresponding to the first image; in yet another embodiment, the vehicle performs image preprocessing on a first image obtained from the visual field camera, transmits the preprocessed image to the extracorporeal device, and encodes the stimulation parameter by the extracorporeal device.
In the above embodiment, the image preprocessing performed by the in-vitro device/vehicle control center on the first image and the calculation and generation of the corresponding stimulation parameter may include: firstly, obtaining a gray level image of a first image, and preprocessing the gray level image by adopting a median filtering technology to obtain a denoised first image; performing edge extraction and edge expansion on the denoised first image to obtain edge characteristic information; according to the parameters of the microelectrode arrays in the in-vivo device, down-sampling the first image and obtaining dot matrix images which correspond to the microelectrode arrays in the in-vivo device one by one; and calculating corresponding stimulation parameters by combining the dot matrix images according to the parameters of the microelectrode stimulator.
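As a rough illustration of the preprocessing and down-sampling steps above, the following NumPy sketch median-filters a grayscale image and block-averages it down to one value per microelectrode. The 8x8 array size, the block-average down-sampling, and the linear amplitude mapping are assumptions for illustration, not parameters from the patent:

```python
import numpy as np


def median_filter3(img):
    """3x3 median filter: the denoising step applied to the gray image."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = [padded[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    return np.median(np.stack(windows), axis=0)


def to_electrode_lattice(img, rows, cols):
    """Down-sample to one value per microelectrode by block averaging."""
    h, w = img.shape
    bh, bw = h // rows, w // cols
    trimmed = img[: bh * rows, : bw * cols]
    return trimmed.reshape(rows, bh, cols, bw).mean(axis=(1, 3))


gray = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
denoised = median_filter3(gray)
lattice = to_electrode_lattice(denoised, 8, 8)  # assumed 8x8 electrode array
stimulation = lattice / 255.0                   # normalized amplitude per electrode
```

The edge-extraction and stimulator-specific parameter calculation mentioned in the text are omitted here, since the patent does not specify their algorithms.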
It will be understood by those skilled in the art that, in the subsequent steps of the vision prosthesis based autonomous vehicle control method, the image captured by the vision camera is sent to the vision prosthesis according to the above description, and the details will not be described in the following.
After the vehicle control preparation operation is executed, the vision prosthesis detects the eyeball motion signal (namely the eye electrical signal) of the user at regular time or in real time, and obtains eyeball track information from the eyeball motion signal of the user, wherein the eyeball track information comprises the eyeball motion direction and the eyeball motion amplitude, and the eyeball track information can also comprise the watching time length and/or the eye movement sequence.
Before the visual prosthesis extracts the eyeball track information of the eyeball motion signal of the user, the method further comprises the following steps: the user eyeball motion signal detected by the sensor is amplified by the amplifier and converted into a digital user eyeball motion signal by the analog-to-digital conversion module, and then the user eyeball motion signal is filtered and de-noised. And then extracting eyeball track information from the eyeball motion signal after noise elimination, specifically:
Extracting a feature vector from the waveform of the eye movement signal, where the feature vector can be expressed as a vector of k elements, X = [x1, x2, x3, ..., x(k-2), x(k-1), xk], in which each element x is the amplitude at a point on the waveform and the time interval between any two adjacent elements is equal.
After extracting the characteristic vector, calculating Euclidean distances between the characteristic vector and a preset waveform template vector, comparing values of all the Euclidean distances and taking the minimum Euclidean distance, wherein the track information of the waveform template vector corresponding to the minimum Euclidean distance is the eyeball track information to be extracted.
Wherein, the corresponding relation between the waveform template vector and the track information of the eyeball is prestored in the visual prosthesis.
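The minimum-Euclidean-distance template matching described above can be sketched as follows. The waveform templates, their labels, and the seven-sample vector length are hypothetical stand-ins for the pre-stored waveform template vectors:

```python
import numpy as np

# Hypothetical waveform templates; in the prosthesis these templates and
# their trajectory labels would be pre-stored.
TEMPLATES = {
    "left":  np.array([0.0, -0.4, -0.9, -1.0, -0.9, -0.4, 0.0]),
    "right": np.array([0.0,  0.4,  0.9,  1.0,  0.9,  0.4, 0.0]),
    "blink": np.array([0.0,  1.0,  0.0,  1.0,  0.0,  0.0, 0.0]),
}


def match_trajectory(feature_vec):
    """Return the trajectory label whose template has the smallest
    Euclidean distance to the extracted feature vector."""
    x = np.asarray(feature_vec, dtype=float)
    return min(TEMPLATES, key=lambda k: np.linalg.norm(x - TEMPLATES[k]))


noisy_left = [0.05, -0.38, -0.85, -1.02, -0.88, -0.41, 0.02]
print(match_trajectory(noisy_left))  # → left
```

A production system would also reject vectors whose minimum distance exceeds a threshold rather than always returning the nearest template.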
The preset motion trajectory contains the same content as the eyeball trajectory information, namely the preset motion trajectory comprises the eyeball motion direction and the eyeball motion amplitude, and the preset motion trajectory also can comprise the fixation time length and/or the eye movement sequence.
The extracted eyeball track information is matched against the preset motion track; that is, the eyeball movement direction and the eyeball movement amplitude in the eyeball track information are compared with the eyeball movement direction and the eyeball movement amplitude in the preset motion track. The eyeball track information matches the preset motion track when the movement directions are the same and the movement amplitudes are the same, or when the amplitude error is less than a preset value (pre-stored in the visual prosthesis).
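A minimal sketch of this match test, assuming the track is represented by a direction label and an amplitude value; the field names and the 0.1 default tolerance are illustrative, not values from the patent:

```python
def matches_preset(track, preset, amp_tolerance=0.1):
    """True when the eyeball track matches a preset motion track: the
    movement directions must be identical and the amplitude error must be
    at most the pre-stored threshold."""
    return (track["direction"] == preset["direction"]
            and abs(track["amplitude"] - preset["amplitude"]) <= amp_tolerance)


preset = {"direction": "left", "amplitude": 1.0}
print(matches_preset({"direction": "left", "amplitude": 0.95}, preset))  # → True
print(matches_preset({"direction": "right", "amplitude": 1.0}, preset))  # → False
```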
And after judging that the eyeball motion of the user is matched with the preset motion track, taking the eyeball track information as eyeball control information and sending the eyeball control information to the vehicle. The eyeball control information is eyeball track information matched with the preset motion track.
Step S20, converting the eyeball control information into a corresponding control instruction, and executing a corresponding operation based on the control instruction.
The correspondence between eyeball control information and vehicle control instructions is preset in the vehicle. The correspondence can be stored as a mapping table, in which case the corresponding control instruction is obtained by querying the mapping table with the eyeball control information; the correspondence can also be packaged into a functional module, in which case the corresponding control instruction is obtained by calling the module.
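A minimal sketch of the mapping-table form of this correspondence, assuming string labels for both the eyeball control information and the control instructions (the specific labels are illustrative, drawn from the examples given later in the text):

```python
# Illustrative mapping table from eyeball control information to vehicle
# control instructions; keys and instruction names are assumptions.
CONTROL_MAP = {
    "double_blink":  "music_on",
    "left_hold_2s":  "turn_left",
    "right_hold_2s": "turn_right",
    "down_hold_2s":  "brake",
}

def to_control_instruction(eyeball_control_info):
    """Query the mapping table; return None when no instruction is mapped."""
    return CONTROL_MAP.get(eyeball_control_info)
```

The functional-module variant mentioned in the text would simply wrap this lookup behind an interface call instead of exposing the table.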
The control instruction can be a driving control instruction such as braking, acceleration/deceleration, turning left, or turning right; a device control instruction such as turning on music or the interior lights; or a control instruction such as controlling the shooting angle of a vehicle camera.
As examples of the correspondence between eyeball control information and control instructions, the eyeball control information obtained by blinking twice in rapid succession may correspond to a "music on" control instruction, and the eyeball control information obtained by moving the eyeball to the left and holding it there for 2 seconds may correspond to a "turn left" control instruction.
In this embodiment, eyeball control information sent by a visual prosthesis is received. The eyeball control information is generated as follows: the visual prosthesis detects an eyeball movement signal of the user, extracts eyeball track information based on that signal, and matches the eyeball track information against a preset motion track; when they match, the eyeball track information is taken as eyeball control information. The eyeball control information is then converted into a corresponding control instruction, and a corresponding operation is executed based on that instruction. In this way, information interaction between the visual prosthesis and the vehicle is realized: the eyeball control information sent by the visual prosthesis is mapped to a control instruction, and the vehicle is controlled accordingly. A user can therefore control the vehicle through the visual prosthesis, which improves the intelligence of both the visual prosthesis and the vehicle as well as the user-friendliness of vehicle control.
Further, a second embodiment of the vision prosthesis based autonomous vehicle control method of the invention is proposed based on the first embodiment.
In a second embodiment of the visual prosthesis-based autonomous vehicle control method of the present invention, step S10 is preceded by:
step S30, determining the current driving mode level of the vehicle;
the visual prosthesis also has different control authority over the vehicle at different levels of autopilot. The vehicle control center can directly acquire the driving mode level setting of the vehicle. The driving mode level may be set autonomously by the user; limited autonomous settings can also be made by the user under the context constraints set by the vehicle system, namely: the vehicle system presets control authorities corresponding to different road condition scenes, for example, for few-people and few-vehicle scenes, a user can select all the control authorities at the same level, and for many-people scenes, the user can only select the control authority at the lower level but not select the control authority at the higher level.
Further, in the present embodiment, step S20 includes:
step S21, generating a corresponding target visual field instruction according to a preset visual field instruction generation rule and the eyeball control information when the vehicle is in a primary automatic driving mode; and adjusting the shooting angle of the visual field camera based on the target visual field instruction.
In this embodiment, the driving mode levels of the vehicle include a primary automatic driving mode. In this mode, the degree of automation of the vehicle is high and the corresponding control authority of the visual prosthesis is at a lower level: the shooting angle of the visual field camera can be adjusted according to the eyeball motion track of the user, i.e., a shooting angle adjustment instruction can be issued based on the user's eyeball motion, improving the intelligence of the vehicle and the visual prosthesis.
Adjusting the shooting angle of the visual field camera is mainly achieved by adjusting the rotation angle of the visual field camera relative to the fixed base.
In one embodiment, the preset visual field instruction generation rule is: query the correspondence between visual field control instructions and eyeball tracks in a visual field instruction list to obtain the target visual field instruction corresponding to the eyeball control information. The visual field instruction list can take the form of a mapping table queried with the eyeball control information; alternatively, the list can be stored in a visual field instruction module that returns the target visual field instruction when the eyeball control information is input to it. The visual field instruction list may be preset in the vehicle, or preset in the visual prosthesis and transmitted by the visual prosthesis to the vehicle.
The shooting angle adjustment of the visual field camera covers both a moving direction and an angle. The eyeball motion direction and the eyeball motion amplitude can be obtained by analyzing the eyeball control information.
The moving direction of the shooting angle can be consistent with the eyeball movement direction: for example, in the visual field instruction list, an eyeball turning left corresponds to a control instruction for the visual field camera to turn left, an eyeball turning right to the camera turning right, an eyeball moving up to the camera moving up, and an eyeball moving down to the camera moving down. The moving direction of the shooting angle may also differ from the eyeball movement direction, as set by the user or preset by default in the vehicle system. The correspondence between the shooting angle moving direction and the eyeball movement direction is stored in the visual field instruction list and can cover movement control instructions such as up, down, left, right, upper left, lower left, upper right, and lower right. The moving step of the shooting angle can be set to a fixed size.
In another embodiment, the preset horizon instruction generation rule is: and acquiring the eyeball movement direction and the eyeball movement amplitude from the eyeball control information, generating a corresponding target visual field instruction according to the eyeball movement direction and the eyeball movement amplitude, adjusting the movement direction of the visual field camera to be the eyeball movement direction, and adjusting the movement amplitude of the visual field camera to be the eyeball movement amplitude.
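The second generation rule above (camera follows the eyeball's direction and amplitude) can be sketched as a small lookup. The (pan, tilt) offset encoding in degrees is an illustrative assumption; the patent does not specify a representation for the target visual field instruction.

```python
def target_view_instruction(eye_direction, eye_amplitude_deg):
    """Second rule from the text: the camera's moving direction equals the
    eyeball movement direction and its moving amplitude equals the eyeball
    movement amplitude. Returns an assumed (pan, tilt) offset in degrees."""
    offsets = {
        "left":  (-eye_amplitude_deg, 0.0),
        "right": ( eye_amplitude_deg, 0.0),
        "up":    (0.0,  eye_amplitude_deg),
        "down":  (0.0, -eye_amplitude_deg),
    }
    return offsets[eye_direction]
```

Under the first rule (fixed step size), the amplitude argument would simply be replaced by a constant.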
The visual field camera adjusted according to the eyeball control information refers to a current lens of the visual prosthesis, namely a camera to which an image seen by a user through the visual prosthesis belongs.
Further, in step S21, a corresponding target visual field instruction is generated according to a preset visual field instruction generation rule and the eyeball control information; the step of adjusting the shooting angle of the visual field camera based on the target visual field instruction comprises the following steps:
step S211, receiving head action information sent by the visual prosthesis;
when the vehicle is in a first-level automatic driving mode, if the head action information sent by the visual prosthesis is received, the shooting angle of the visual field camera can be adjusted according to the head action information and the eyeball control information.
After the current driving mode level of the vehicle is determined, it can be sent to the visual prosthesis, which then acquires the corresponding control information. When the vehicle is in the primary automatic driving mode, the visual prosthesis can acquire head action information and eyeball control information and send both to the vehicle as control information.
An acceleration sensor and a gyroscope can be configured on the external device of the visual prosthesis and used for collecting head action information.
A head action moves the eyes fixed to the head and thereby shifts the user's visual field, so the movement of the eyes can be represented by the head action information: the eye movement direction and the eye movement amplitude are obtained from the head action information, and the shooting angle of the camera is adjusted accordingly.
Step S212, calculating a target change track of a shooting angle of the visual field camera according to the head action information and the eyeball control information;
the vehicle calculates the target change track of the shooting angle of the visual field camera from the visual field change track desired by the user; the target change track coincides with that desired visual field change track.
Specifically, step S212 includes:
analyzing the head action information to obtain the eye movement direction and the eye movement amplitude; analyzing the eyeball control information to obtain the eyeball movement direction and the eyeball movement amplitude; and determining the change direction and the change amplitude of the visual field focus of the user based on the eye movement direction and the eye movement amplitude, and the eyeball movement direction and the eyeball movement amplitude, and taking the change direction and the change amplitude of the visual field focus as a target change track of the shooting angle of the visual field camera.
In the present embodiment, the change direction and change amplitude of the user's visual field focus represent the visual field change track desired by the user. The eye movement direction and amplitude form an eye movement vector, and the eyeball movement direction and amplitude form an eyeball movement vector; the change direction and change amplitude of the visual field focus are obtained by vector addition of these two vectors and are taken as the change direction and change amplitude of the target change track.
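The vector addition described above can be computed as follows, assuming each movement is given as a (direction in degrees, amplitude) pair; this polar encoding is an illustrative assumption.

```python
import math

def view_focus_change(head_vector, eye_vector):
    """Vector-add the head movement vector and the eyeball movement vector,
    each given as (direction_deg, amplitude), and return the change
    direction (degrees) and change amplitude of the visual field focus."""
    # Convert each polar vector to Cartesian components.
    hx = head_vector[1] * math.cos(math.radians(head_vector[0]))
    hy = head_vector[1] * math.sin(math.radians(head_vector[0]))
    ex = eye_vector[1] * math.cos(math.radians(eye_vector[0]))
    ey = eye_vector[1] * math.sin(math.radians(eye_vector[0]))
    # Sum the components, then convert back to direction and amplitude.
    x, y = hx + ex, hy + ey
    return math.degrees(math.atan2(y, x)), math.hypot(x, y)
```

For example, a 3-unit head movement at 0° combined with a 4-unit eyeball movement at 90° yields a focus change of amplitude 5 at roughly 53.1°, which becomes the target change track of the camera's shooting angle.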
And step S213, adjusting the shooting angle of the visual field camera according to the target change track.
And rotating the shooting angle of the visual field camera to the change direction of the target change track, and rotating the shooting angle by an angle corresponding to the change amplitude of the target change track.
According to the embodiment, when the vehicle is in the primary automatic driving mode, the target change track of the shooting angle of the visual field camera is calculated according to the head action information and the eyeball control information by receiving the head action information sent by the visual prosthesis, and the shooting angle of the visual field camera is adjusted according to the target change track, so that the shooting angle of the visual field camera can be changed according to the change of eyes and eyeballs of a user by the vehicle, an image which is consistent with the expected visual field of the user is provided and transmitted to the visual prosthesis, and the intelligence and the user friendliness of the vehicle can be improved.
Further, a third embodiment of the vision prosthesis based autonomous vehicle control method of the present invention is proposed based on the above-described embodiments.
In a third embodiment of the vision prosthesis-based autonomous vehicle control method of the present invention, step S20 further includes:
and step S22, inquiring a preset driving instruction list when the vehicle is in the secondary automatic driving mode, obtaining a target driving instruction corresponding to the eyeball control information, and controlling the vehicle to execute corresponding driving operation based on the target driving instruction.
In this embodiment, the driving mode level of the vehicle includes a secondary automatic driving mode, in this driving mode, the automatic driving degree of the vehicle is low, the corresponding control authority of the visual prosthesis is at a high level, and the driving behavior of the vehicle, including braking, acceleration and deceleration, left-turning, right-turning, and the like, can be controlled according to the eyeball control information sent by the visual prosthesis. Alternatively, a step of determining the current driving mode level of the vehicle may be included before step S10, or the current driving mode level of the vehicle may be determined before step S20 and after step S10.
And inquiring a preset driving instruction list, and obtaining a target driving instruction corresponding to the eyeball control information according to the corresponding relation between the eyeball track and the driving instruction stored in the driving instruction list. Specifically, the method comprises the following steps:
when the eyeball track corresponding to the eyeball control information is a first track, the preset driving instruction list is queried, the target driving instruction corresponding to the first track is found to be a braking instruction, and the vehicle executes a braking operation; the first track can be, for example, looking from top to bottom twice, or looking from top to bottom once and holding the gaze down for 2 seconds;
when the eyeball track corresponding to the eyeball control information is a second track, the preset driving instruction list is queried, the target driving instruction corresponding to the second track is found to be a left turn instruction, and the vehicle executes a left turn operation; the second track can be, for example, looking left and holding for 3 seconds, or looking to the lower left;
when the eyeball track corresponding to the eyeball control information is a third track, the preset driving instruction list is queried, the target driving instruction corresponding to the third track is found to be a right turn instruction, and the vehicle executes a right turn operation; the third track can be, for example, looking right and holding for 3 seconds, or looking to the lower right;
when the eyeball track corresponding to the eyeball control information is a fourth track, the preset driving instruction list is queried, the target driving instruction corresponding to the fourth track is found to be a deceleration instruction, and the vehicle executes a deceleration operation; the fourth track can be, for example, looking down and holding for 2 or 3 seconds.
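The four cases above reduce to a lookup in the preset driving instruction list. The track key names below are shorthand labels for the example tracks in the text, not identifiers from the patent.

```python
# Driving instruction list for the secondary automatic driving mode; the
# keys paraphrase the four example tracks described in the text.
DRIVING_INSTRUCTIONS = {
    "track1_down_twice_or_hold_2s": "brake",
    "track2_left_hold_3s":          "turn_left",
    "track3_right_hold_3s":         "turn_right",
    "track4_down_hold":             "decelerate",
}

def target_driving_instruction(track):
    """Query the preset driving instruction list for the given track."""
    return DRIVING_INSTRUCTIONS.get(track)
```

An unrecognized track returns None, in which case the vehicle would simply take no driving action for that eyeball input.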
Optionally, when the vehicle is in the secondary automatic driving mode, the vehicle acquires a main image within a preset azimuth angle and sends it to the visual prosthesis. The main image refers to images within a preset angle range to the front, left, and right of the vehicle; the preset azimuth angle refers to that angle range, and its specific value can be preset in the vehicle.
In this embodiment, the target driving instruction corresponding to the eyeball control information is obtained, and the vehicle is controlled to execute the corresponding driving operation based on the target driving instruction, so that the vehicle driving control can be performed according to the eyeball movement of the user, and the intelligence of the vehicle and the visual prosthesis is improved.
Optionally, the step of obtaining the target driving instruction corresponding to the eyeball control information in step S22 is followed by:
step S221, calculating a predicted driving area of the vehicle according to the target driving instruction and the current position of the vehicle;
the area occupied by the vehicle on the road surface is a fixed rectangular area, and the vehicle always performs mechanical motion on a plane taking the road surface as a reference coordinate system, so that the expected driving area can be determined according to the current position of the vehicle and the subsequent driving operation corresponding to the target driving instruction. The predicted travel area refers to a road area where the vehicle is predicted to pass if the vehicle performs driving control according to the target driving command.
When the subsequent driving operation corresponding to the target driving instruction does not include a left or right turn (for example, the target driving instruction is a pure acceleration/deceleration instruction and the subsequent driving track of the vehicle is a straight line), the predicted driving area of the vehicle is the area directly in front of the vehicle. Optionally, whether a lane boundary exists on the road where the vehicle is located is judged based on the road image acquired by the vehicle camera; if a lane boundary exists, the boundary of the lane where the vehicle is located is obtained, and the rectangular area defined by that boundary is the predicted driving area of the vehicle.
When the subsequent driving operation corresponding to the target driving instruction includes a left or right turn (for example, the target driving instruction is a left turn or right turn instruction and the subsequent driving track of the vehicle is a curve), the left/right side or front-left/front-right area of the vehicle can be used as the predicted driving area until a new target driving instruction is detected, at which point the predicted driving area is determined again.
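The two cases above amount to a simple mapping from the target driving instruction to a predicted region. The region labels below are an illustrative encoding; the patent describes the regions only in words.

```python
def predicted_region(instruction):
    """Map a target driving instruction to the predicted travel region per
    the rule in the text; the region labels are illustrative assumptions."""
    if instruction == "turn_left":
        return "front_left"
    if instruction == "turn_right":
        return "front_right"
    # Pure acceleration/deceleration/braking keeps a straight-line track,
    # so the predicted area is the region directly ahead of the vehicle.
    return "ahead"
```

In a fuller implementation the region would be refined into a concrete road rectangle using the lane boundaries detected in the camera image, as the text describes.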
Step S222, acquiring a road condition image on the expected driving area, performing object recognition on the road condition image, and judging whether an obstacle exists on the expected driving area; in step S223, if an obstacle exists in the estimated travel area, an abnormality warning is output.
After the predicted driving area is determined, acquiring a road condition image on the determined predicted driving area through a camera on the vehicle, shooting a continuous image sequence through the camera in front of the vehicle when the predicted driving area is an area right in front of the vehicle, and carrying out object identification based on the image sequence; when the expected driving area is the left/right side or left front/right front area of the vehicle, a continuous image sequence is shot by the left/right side camera of the vehicle, and object recognition is carried out based on the image sequence.
In one embodiment, an obstacle feature library storing the contour features of various obstacles may be preset. When the road condition image is identified, the contour features in the image are extracted and matched against the feature set in the obstacle feature library to determine whether an obstacle exists in the predicted driving area. If an obstacle exists in the predicted driving area, an abnormality warning is output to alert the user; if not, step S221 continues to be executed for obstacle monitoring.
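The library-matching idea can be sketched as a nearest-feature check. The feature representation (fixed-length descriptors), the library entries, and the distance threshold are all illustrative assumptions; the patent does not specify the feature format.

```python
import math

# Hypothetical obstacle feature library: contour features reduced to
# fixed-length descriptors (entries and values are illustrative).
OBSTACLE_FEATURES = {
    "pedestrian": [0.9, 0.2, 0.7],
    "vehicle":    [0.3, 0.8, 0.4],
}

def has_obstacle(extracted_feature, threshold=0.3):
    """Return True when the extracted contour feature lies within
    `threshold` (Euclidean distance) of any library entry."""
    for feature in OBSTACLE_FEATURES.values():
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(extracted_feature, feature)))
        if d < threshold:
            return True
    return False
```

When this check returns True for a feature extracted from the predicted driving area, the abnormality warning of step S223 would be triggered.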
In another embodiment, object recognition may be performed on the road condition image by a neural-network-based classification model: the road condition image data is input into the trained classification model, which determines whether an obstacle is present. To train the preset classification model, training samples are obtained, namely road condition image data annotated with obstacles; features are extracted from the training samples, the optimal model parameters of the classification model are computed by an iterative algorithm based on those features, and the classification model containing the optimal parameters is thereby trained.
According to the embodiment, the vehicle driving track is predicted, the obstacle identification is carried out on the road condition image in the expected driving area, when the obstacle exists in the expected driving area, the abnormal warning is output, the driving assistance of the user can be realized, and the driving safety is improved.
Further, in a fourth embodiment based on the above embodiments, the visual prosthesis-based autonomous vehicle control method further includes:
step S40, when the vehicle is in the secondary automatic driving mode, detecting that there are a preset number of feasible roads at a preset distance in the driving direction of the vehicle, and outputting a direction inquiry prompt, wherein the preset number is greater than or equal to two;
in this embodiment, the current driving mode level of the vehicle is determined, and when the vehicle is in the secondary automatic driving mode, the vehicle driving control can be performed according to the eyeball control information.
A preset number of feasible roads at a preset distance in the driving direction of the vehicle typically corresponds to a fork in the road (e.g., a crossroad). The vehicle can acquire an image in the driving direction and perform road recognition on it to determine whether the preset number of feasible roads exists at the preset distance; alternatively, the vehicle can obtain map navigation information through networking and make the determination from the navigation information. The preset distance can be customized by the user or set by default in the vehicle, for example 100 meters or 200 meters; the specific value is not limited.
The direction inquiry prompts may be audibly output or displayed on a vehicle user interface, with the user determining the road and direction of travel.
And step S41, acquiring the current latest eyeball control information, determining a target road according to the latest eyeball control information, and driving towards the direction of the target road.
In the embodiments of the present invention, the corresponding control instruction is determined based on the latest eyeball control information; in this embodiment, the target road is determined according to the latest eyeball control information.
Because different feasible roads have different directions, such as the left, right, and straight roads at an intersection, the driving direction selected by the user can be determined from the eyeball movement direction corresponding to the latest eyeball control information. The eyeball movement direction can be consistent with the driving direction. For example, the vehicle asks the user whether to turn left, turn right, or go straight at the crossroad two hundred meters ahead; if the user wants to turn left, the user looks left for more than three seconds, and the vehicle determines a left turn after detecting the eyeball control information. The eyeball movement direction can also be inconsistent with the driving direction, customized by the user according to personal habits or preset in the vehicle.
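Selecting the target road from the gaze direction can be sketched as follows, assuming (as in the text's example) that the gaze direction is consistent with the driving direction. The road record fields are illustrative assumptions.

```python
def choose_road(feasible_roads, gaze_direction):
    """Pick the feasible road whose direction matches the eyeball movement
    direction of the latest eyeball control information; return None when
    no road matches, in which case the vehicle would keep prompting."""
    for road in feasible_roads:
        if road["direction"] == gaze_direction:
            return road["name"]
    return None
```

A user-customized mapping (gaze direction inconsistent with driving direction) would simply translate the gaze direction through a lookup table before this selection.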
In the embodiment, when the vehicle is in the secondary automatic driving mode, a preset number of feasible roads at a preset distance in the vehicle driving direction are detected, a direction inquiry prompt is output, the current latest eyeball control information is obtained, the target road is determined according to the latest eyeball control information, the vehicle is driven in the direction of the target road, and a user can select from the feasible roads through a visual prosthesis, so that the intelligent control of the vehicle is realized.
The invention also proposes a storage medium on which a computer program is stored. The storage medium may be the Memory 103 in the autonomous vehicle of fig. 1, or may be at least one of a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, and an optical disk, and the storage medium includes several instructions to enable a device (which may be a mobile phone, a computer, a server, a network device, or an autonomous vehicle in the embodiment of the present invention) having a processor to execute the method in the embodiments of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or server that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or server.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A vision prosthesis-based autonomous vehicle control method, characterized by comprising the steps of:
receiving eyeball control information sent by a visual prosthesis, wherein the generation step of the eyeball control information comprises the following steps: detecting an eyeball movement signal of a user through a visual prosthesis, extracting eyeball track information based on the eyeball movement signal of the user through the visual prosthesis, and matching and judging the eyeball track information with a preset movement track; when the eyeball track information is matched with a preset motion track, taking the eyeball track information as eyeball control information;
and converting the eyeball control information into a corresponding control instruction, and executing corresponding operation based on the control instruction.
2. The vision prosthesis-based autonomous vehicle control method of claim 1, wherein the step of detecting the user eye movement signal by the vision prosthesis is preceded by:
determining the current driving mode level of the vehicle;
the step of converting the eyeball control information into a corresponding control instruction and executing a corresponding operation based on the control instruction comprises:
when the vehicle is in one level of automatic driving mode, generating a corresponding target visual field instruction according to a preset visual field instruction generation rule and the eyeball control information;
and adjusting the shooting angle of the visual field camera based on the target visual field instruction.
3. The vision prosthesis-based autonomous vehicle control method of claim 2, wherein the step of generating a corresponding target visual field instruction according to a preset visual field instruction generation rule and the eyeball control information and adjusting the shooting angle of the visual field camera based on the target visual field instruction comprises:
receiving head action information sent by a visual prosthesis;
calculating a target change track of a shooting angle of a visual field camera according to the head action information and the eyeball control information;
and adjusting the shooting angle of the visual field camera according to the target change track.
4. The vision prosthesis-based autonomous vehicle control method of claim 3, wherein the step of calculating a target variation trajectory of a visual field camera photographing angle based on the head motion information and the eyeball control information comprises:
analyzing the head action information to obtain the eye movement direction and the eye movement amplitude;
analyzing the eyeball control information to obtain the eyeball movement direction and the eyeball movement amplitude;
and determining the change direction and the change amplitude of the visual field focus of the user based on the eye movement direction and the eye movement amplitude, and the eyeball movement direction and the eyeball movement amplitude, and taking the change direction and the change amplitude of the visual field focus as a target change track of the shooting angle of the visual field camera.
5. The vision prosthesis-based autonomous-vehicle control method of claim 2, wherein the converting the eyeball control information into a corresponding control instruction and performing a corresponding operation based on the control instruction comprises:
and when the vehicle is in the automatic driving mode of another level, inquiring a preset driving instruction list, obtaining a target driving instruction corresponding to the eyeball control information, and controlling the vehicle to execute corresponding driving operation based on the target driving instruction.
6. The vision prosthesis-based autonomous-vehicle control method of claim 5, wherein the step of obtaining the target driving instruction corresponding to the eye control information is followed by:
calculating a predicted driving area of the vehicle according to the target driving instruction and the current position of the vehicle;
acquiring a road condition image on the predicted driving area, performing object recognition on the road condition image, and judging whether an obstacle exists on the predicted driving area;
and if the obstacle exists in the expected driving area, outputting an abnormal warning.
7. The vision prosthesis-based autonomous vehicle control method of claim 2, wherein the vision prosthesis-based autonomous vehicle control method further comprises:
when the vehicle is in an automatic driving mode of another level, detecting that a preset number of feasible roads are located at a preset distance in the driving direction of the vehicle, and outputting a direction inquiry prompt, wherein the preset number is greater than or equal to two;
acquiring current latest eyeball control information, determining a target road according to the latest eyeball control information, and driving towards the direction of the target road;
and controlling the vehicle to run towards the target direction.
8. An autonomous vehicle comprising a vehicle control center, a vision camera, a memory, and a vision prosthesis-based autonomous vehicle control program stored in the memory and executable by the vehicle control center, the vision prosthesis-based autonomous vehicle control program, when executed by the vehicle control center, implementing the steps of the vision prosthesis-based autonomous vehicle control method of any one of claims 1 to 7.
9. A visual prosthesis, comprising an in-vivo (implanted) device and an in-vitro (external) device that are wirelessly connected; the visual prosthesis further stores a vision prosthesis-based autonomous vehicle control program which, when executed by the visual prosthesis, implements the steps of:
detecting an eyeball movement signal of a user, extracting eyeball track information based on the eyeball movement signal of the user, and matching the eyeball track information against a preset motion track; when the eyeball track information matches a preset motion track, taking the eyeball track information as eyeball control information and sending the eyeball control information to the autonomous vehicle of claim 8.
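The track-matching step that recurs in claims 9 and 10 could be sketched like this. The patent does not disclose a similarity measure, so the mean point-to-point distance and the threshold used here are assumptions, as are the preset track names.

```python
# Hedged sketch of the trajectory-matching step: compare a detected
# eyeball track against a dictionary of preset motion tracks and accept
# it as eyeball control information only on a sufficiently close match.
import math


def track_distance(track_a, track_b):
    """Mean Euclidean distance between two equal-length point sequences."""
    return sum(math.dist(p, q) for p, q in zip(track_a, track_b)) / len(track_a)


def match_preset_track(eye_track, preset_tracks, threshold=0.5):
    """Return the name of the best-matching preset track, or None."""
    best_name, best_dist = None, float("inf")
    for name, preset in preset_tracks.items():
        d = track_distance(eye_track, preset)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None


presets = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_up": [(0, 0), (0, 1), (0, 2)],
}
detected = [(0.0, 0.1), (1.0, 0.1), (2.0, 0.0)]
# "detected" is close to "swipe_right", so it would become eyeball
# control information; a distant track would be rejected (None).
```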
10. A vision prosthesis-based autonomous vehicle control system, comprising the autonomous vehicle of claim 8 and a vision prosthesis in wired or wireless connection with the autonomous vehicle;
the visual prosthesis is used for detecting an eyeball movement signal of a user, extracting eyeball track information based on the eyeball movement signal of the user, and matching the eyeball track information against a preset motion track; when the eyeball track information matches a preset motion track, taking the eyeball track information as eyeball control information and sending the eyeball control information to the vehicle;
the autonomous vehicle is used for receiving the eyeball control information sent by the visual prosthesis, converting the eyeball control information into a corresponding control instruction, and executing the corresponding operation based on the control instruction.
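The vehicle-side conversion in claim 10, and the "preset driving instruction list" of claim 5, amount to a table lookup. The following sketch invents its table contents purely for illustration; the patent does not enumerate the gestures or instructions.

```python
# Minimal sketch of the vehicle side of claim 10: look up the received
# eyeball control information in a preset driving-instruction list and
# dispatch the matching operation, ignoring anything unmatched.
DRIVING_INSTRUCTIONS = {
    "swipe_up": "accelerate",
    "swipe_down": "brake",
    "swipe_left": "turn_left",
    "swipe_right": "turn_right",
}


def handle_eye_control(eye_control_info: str) -> str:
    instruction = DRIVING_INSTRUCTIONS.get(eye_control_info)
    if instruction is None:
        return "ignore"  # unmatched information is discarded, not executed
    return instruction
```

Discarding unmatched input, rather than guessing, mirrors the claims' insistence that only track information matching a preset motion track becomes a control signal.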
CN201811598562.5A 2018-12-25 2018-12-25 Automatic driving vehicle control method and system, automatic driving vehicle and visual prosthesis Active CN109808711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811598562.5A CN109808711B (en) 2018-12-25 2018-12-25 Automatic driving vehicle control method and system, automatic driving vehicle and visual prosthesis

Publications (2)

Publication Number Publication Date
CN109808711A CN109808711A (en) 2019-05-28
CN109808711B true CN109808711B (en) 2020-07-07

Family

ID=66602419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811598562.5A Active CN109808711B (en) 2018-12-25 2018-12-25 Automatic driving vehicle control method and system, automatic driving vehicle and visual prosthesis

Country Status (1)

Country Link
CN (1) CN109808711B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110727269B (en) * 2019-10-09 2023-06-23 陈浩能 Vehicle control method and related product
CN111147743B (en) * 2019-12-30 2021-08-24 维沃移动通信有限公司 Camera control method and electronic equipment

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JP2000279435A (en) * 1999-03-29 2000-10-10 Shimadzu Corp Line of sight input type control system for body auxiliary device
US7483751B2 (en) * 2004-06-08 2009-01-27 Second Sight Medical Products, Inc. Automatic fitting for a visual prosthesis
CN101396583B (en) * 2008-10-30 2011-06-29 上海交通大学 Vision prosthesis device based on optical-disc micro-electrode array
CN102906810B (en) * 2010-02-24 2015-03-18 爱普莱克斯控股公司 Augmented reality panorama supporting visually impaired individuals
JP2015504616A (en) * 2011-09-26 2015-02-12 マイクロソフト コーポレーション Video display correction based on sensor input of transmission myopia display
CN102813574B (en) * 2012-08-03 2014-09-10 上海交通大学 Visual prosthesis image acquisition device on basis of eye tracking

Also Published As

Publication number Publication date
CN109808711A (en) 2019-05-28

Similar Documents

Publication Publication Date Title
KR100834577B1 (en) Home intelligent service robot and method capable of searching and following moving of target using stereo vision processing
JP6638852B1 (en) Imaging device, imaging system, imaging method, and imaging program
US9316502B2 (en) Intelligent mobility aid device and method of navigating and providing assistance to a user thereof
CN109808711B (en) Automatic driving vehicle control method and system, automatic driving vehicle and visual prosthesis
JP2003062777A (en) Autonomous acting robot
KR20190078543A (en) Image acqusition device and controlling method thereof
WO2018047392A1 (en) Mobility device and mobility system
US20160037138A1 (en) Dynamic System and Method for Detecting Drowning
CN109685709A (en) A kind of illumination control method and device of intelligent robot
Ali et al. Blind navigation system for visually impaired using windowing-based mean on Microsoft Kinect camera
EP4112372A1 (en) Method and system for driver posture monitoring
CN112148011B (en) Electroencephalogram mobile robot sharing control method under unknown environment
CN107380064A (en) A kind of vehicle-mounted Eye-controlling focus device based on augmented reality
CN111300429A (en) Robot control system, method and readable storage medium
Chowdhury et al. Robust single finger movement detection scheme for real time wheelchair control by physically challenged people
JP4325271B2 (en) Status detection device and status detection system
CN113255560A (en) Target detection system based on image and laser data under automatic driving scene
CN112149473B (en) Iris image acquisition method
CN111736596A (en) Vehicle with gesture control function, gesture control method of vehicle, and storage medium
KR20110118965A (en) Autonomous wheelchair system using gaze recognition
KR102343298B1 (en) Apparatus of recognizing object of vehicle and system of remote parking including the same
WO2019024010A1 (en) Image processing method and system, and intelligent blind aid device
KR102174423B1 (en) Method And Apparatus for Detection of Parking Loss for Automatic Parking
CN114586073A (en) Biometric data capture and analysis using a hybrid sensing system
CN107783652B (en) Method, system and device for realizing virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220809

Address after: No. 64, Suning Avenue, Xuanwu District, Nanjing City, Jiangsu Province, 210000

Patentee after: Nanjing Huamai Robot Technology Co.,Ltd.

Address before: 210023 No. 1 Wenyuan Road, Qixia District, Nanjing City, Jiangsu Province

Patentee before: NANJING NORMAL University
