CN105677206A - System and method for controlling head-up display based on vision - Google Patents
System and method for controlling head-up display based on vision
- Publication number
- CN105677206A (application CN201610013309.3A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- illumination
- instruction
- image data
- HUD
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a vision-based system and method for controlling a head-up display (HUD). The system comprises an image acquisition device for obtaining image data in the driving environment; an illumination sensing device for obtaining an illumination value in the driving environment; an illumination source for providing compensating illumination; and a processing device for processing the acquired data. The system can intelligently execute a wake-up instruction, a music-switching instruction, a call answer/reject instruction, and a page-switching instruction. Preferably, a near-infrared LED provides illumination in the dark without disturbing the human eye. The control method is convenient to operate, its instructions are clear and unambiguous, no additional equipment is needed, and drivers' gestures can be detected and recognized in all weather and under different illumination conditions. Preferably, detection and recognition of static gestures and dynamic gestures are supported simultaneously, providing powerful operating functions.
Description
Technical field
The present invention relates to head-up displays (HUDs), and in particular to a vision-based HUD control system and method.
Background art
A head-up display (HUD) was originally a flight aid used on aircraft. "Head-up" means that the pilot can see the important information he needs without lowering his head. Because of the convenience of the HUD and the improvement it brings to flight safety, airliners also adopted it one after another. A HUD uses the principle of optical reflection to project important flight information onto a piece of glass. This glass is positioned at the front of the cockpit, roughly level with the pilot's eyes, and the projected text and imagery are adjusted to a focal distance of optical infinity, so that when the pilot looks forward through the HUD, the operation of the eyes is not hindered and the indications remain clear.
A basic HUD comprises two parts: a data processing unit and an image display. The data processing unit integrates the data from each system on the aircraft and, according to the selected mode, converts it into predefined symbols, graphics, or textual and numeric output. Some products split signal processing and image output into two devices, but the working principle is generally similar. The image display is mounted at the front of the cockpit, in the space between the pilot and the canopy. It receives information from the data processing unit and projects it onto the glass. The display also has a control panel with which the output image can be adjusted or changed.
Improvements in new-generation HUD image displays include adopting holographic (Holographic) display modes; enlarging the displayed image area, especially increasing the horizontal field-of-view angle; reducing the thickness of the supports to lessen their restriction of and impact on the field of view; strengthening display adjustment under different luminosities and external environments; sharpening the image; and coordinating with other optical image outputs. For example, the forward image produced by an infrared camera can be projected directly onto the HUD and fused with other data for display; night-vision goggles can be supported; and data can be displayed in color. Improvements in the data processing unit include raising processing speed and efficiency. However, since the HUD projects its image onto a fixed device at the front of the cockpit, the image leaves the pilot's field of view whenever the pilot turns his head. The new generation of HUD is increasingly suitable for wide use in automobiles.
Conventionally, driving should naturally focus on safety, but with the popularity of smartphones, users have come to depend at all times on the convenience and speed the phone brings: calls, text messages, WeChat real-time communication, multimedia, map navigation, and so on. With ever more "heads-down" phone users today, the convenience the mobile phone brings us severely affects driving safety; traffic accidents of many kinds are caused by drivers using their phones while driving. Automobile manufacturers have come to realize the importance of the center-console screen as the vehicle becomes a major terminal device, and this in-car "screen" has become hotly contested ground. But whether the in-vehicle center-console screen truly makes driving safer is doubtful: in real experience it still has various drawbacks and inconveniences, and can still distract the driver.
At present, head-up displays (HUDs) are mainly controlled by remote control, voice, or gesture. The disadvantage of the remote-control approach is that extra equipment must be provided to complete the control, operation is unnatural, and the user experience is poor. The disadvantage of voice control is that the device must first be woken up and then operated; erroneous operation easily occurs when people are talking, and the accuracy of command recognition is not high. Gesture control, by contrast, is easy to operate and its instructions are clear. An existing HUD gesture control system performs gesture recognition by emitting a signal and receiving the signals reflected back by different gestures. The shortcomings of this kind of technical solution are:
1. A signal transmitter and a receiver must work together before recognition is possible;
2. The number of recognizable gestures is very limited;
3. If the signal transmission power is too small, interference occurs; if it is too large, it may be harmful to the human body.
Summary of the invention
The technical problem to be solved by the present invention is to provide a HUD control system that is easy to operate, whose instructions are clear and unambiguous, and which needs no additional equipment; that can detect and recognize gestures under different illumination conditions; and that allows the driver to check information and communicate easily and effectively while driving safely.
To solve the above technical problem, the invention provides a vision-based HUD control system, comprising:
an image acquisition device, for obtaining image data in the driving environment;
an illumination sensing device, for obtaining an illumination value in the driving environment;
an illumination source, for providing compensating illumination;
a processing device, connected with the image acquisition device, for forming static or dynamic image data from the above image data, where "static" or "dynamic" refers to the static state or motion state of a gesture; if the image data does not meet a predetermined gesture-image standard, the image data is not stored; if it does, the image data is stored and a gesture instruction is output;
the processing device is also connected with the illumination sensing device, for judging, according to the illumination value, whether to turn on compensating illumination; if so, the compensating illumination is turned on and the image data in the driving environment is acquired again.
The image acquisition device includes at least one camera; the camera captures a two-dimensional hand shape to form static image data, and after successful matching with a preset gesture outputs a wake-up instruction, a music-switching instruction, a call answer/reject instruction, or a page-switching instruction.
The illumination sensing device is connected with the illumination source and turns it off when the illumination value exceeds a set threshold;
when the illumination value is below the set threshold, the illumination source is controlled to turn on and provide illumination for the image acquisition device; the illumination source is a near-infrared LED.
The vision-based HUD control system also includes an OBD communication interface and a display interface, connected with the processing device, for reading information on the vehicle CAN bus in real time and showing it on the display interface.
The present invention also provides a vision-based HUD control method, comprising the following steps:
obtaining image data in the driving environment;
forming static or dynamic image data from the above image data, where "static" or "dynamic" refers to the static state or motion state of a gesture;
if the image data does not meet a predetermined gesture-image standard, not storing it; if it does, storing the image data and then outputting a gesture instruction;
obtaining the illumination value in the driving environment;
judging, according to the illumination value, whether to turn on compensating illumination; if so, turning on the compensating illumination and acquiring the image data in the driving environment again.
The static state of the gesture is recognized based on a two-dimensional hand shape:
after the two-dimensional information of the two-dimensional hand shape is obtained, it is input to obtain a first image;
match recognition is carried out against preset gestures, and if a second image matching the first image is found, an operation instruction is triggered;
or the static state of a two-dimensional gesture is recognized and tracked based on the two-dimensional gesture.
The static state of the gesture includes: a fist, five fingers spread, or a scissors hand;
or a gesture action combining a hand shape with hand motion.
If match recognition against the preset gestures finds nothing, the illumination value in the driving environment is reacquired, compensating illumination is turned on according to the illumination value, and two-dimensional hand-shape matching is carried out again.
The operation instructions include one or more of: a wake-up instruction, a navigation instruction, a telephone instruction, a music instruction, and a WeChat instruction, where:
the navigation instruction includes specifying a destination by voice command, judging whether to avoid congested routes, a navigation-source instruction, a driving road-condition instruction, and a speed-camera warning instruction;
the wake-up instruction wakes the HUD from dormancy;
the telephone instruction can issue voice-controlled call answering and dialing; when more than one number is stored for the called party, voice selection is possible;
the music instruction can, by voice operation, play music from an SD card or radio-station content;
the WeChat instruction allows WeChat to be logged in beforehand by scanning a QR code; when a WeChat message is received, a voice prompt asks whether to read it aloud; when replying, voice input is converted to text and recognized.
The motion state of the gesture is judged based on the image data and the illumination value obtained in the driving environment; gesture instructions are input using structured-light, time-of-flight, or multi-camera imaging technology.
Beneficial effects of the present invention:
1) An image acquisition device is used as the sensor of the HUD gesture control system. It includes at least one camera; the camera captures a two-dimensional hand shape to form static image data, and after successful matching with a preset gesture outputs a wake-up instruction, a music-switching instruction, a call answer/reject instruction, or a page-switching instruction.
2) An illumination sensing device and an illumination source are used; preferably, a near-infrared LED provides illumination in the dark without disturbing the human eye.
3) The processing device, connected with the image acquisition device, forms static or dynamic image data from the image data, where "static" or "dynamic" refers to the static state or motion state of a gesture; if the image data does not meet the predetermined gesture-image standard, it is not stored; if it does, it is stored and a gesture instruction is output. The present invention can thus detect and recognize static gestures and dynamic gestures simultaneously.
4) The operation instructions include one or more of a wake-up instruction, a navigation instruction, a telephone instruction, a music instruction, and a WeChat instruction, all of which can be conveniently carried out by gesture.
5) The control method of the present invention is easy to operate, its instructions are clear and unambiguous, and it needs no additional equipment; it can detect and recognize the driver's gestures in all weather and under different illumination conditions. Preferably, it supports detection and recognition of both static and dynamic gestures, providing powerful operating functions.
Brief description of the drawings
Fig. 1 is a flow chart of obtaining a gesture instruction in the driving environment in the vision-based HUD control method according to an embodiment of the present invention;
Fig. 2 is a flow chart of obtaining the illumination value in the driving environment in the vision-based HUD control method according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of the gesture instruction in Fig. 1;
Fig. 4 shows specific examples of the static state of a gesture;
Fig. 5 is another schematic flow chart of the gesture instruction in Fig. 1;
Fig. 6 is a schematic diagram of an embodiment of the gesture instruction in Fig. 1;
Fig. 7 shows specific examples of the motion state of a gesture;
Fig. 8 is a structural diagram of the vision-based HUD control system according to an embodiment of the present invention;
Fig. 9 is a structural diagram of a preferred embodiment of Fig. 8.
Detailed description of the invention
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is a flow chart of obtaining a gesture instruction in the driving environment in the vision-based HUD control method according to an embodiment of the present invention. The flow starts at step 101. Then, in step 102, image data in the driving environment is obtained; those skilled in the art will understand that the means for obtaining image data in the driving environment include a camera or any device with a camera function. In step 103, static or dynamic image data is formed from the above image data; the recorded image data can be classified as static or dynamic according to the driver's action or state. Static image data includes static gesture image data, static face image data, and static eye image data (essentially for iris recognition). Static gesture image data is used to recognize any gesture with which the driver intends to operate, enabling convenient manipulation while driving. Static face image data is used to recognize the driver's face, to judge whether the current driver is the owner of the car. Static eye image data likewise serves identification, namely iris recognition. Biometric iris recognition can make driver identity verification more convenient; for example, a dedicated iris-scanning sensor released by Toshiba can be adopted. Toshiba's near-infrared camera sensor, code-named "T4KE1", is designed to scan the user's iris and can be used for identification and verification. The device mainly uses a standard CMOS photosensitive element with a resolution of 2.1 megapixels, and can output 1080p images and video at 60 fps; to meet the needs of iris scanning, it has strong near-infrared sensitivity. Alternatively, when the driver wears contact lenses, each person's unique iris is treated as a biological fingerprint: the collected information is compared with the stored data to determine identity. With the sensor as close to the eye as possible, an optical sensor embedded in the contact lens receives the light reflected off the iris, and the image formed is compared with the image prestored in the device.
In step 104, it is judged whether the above image data meets the predetermined gesture-image standard. In this step, the data in the acquired gesture image is examined and judged against the predetermined gesture image. If it conforms, the flow enters step 105; if not, step 106.
If it conforms: in step 105, the image data is stored and a gesture instruction is then output. The output gesture instruction acts on the functional operation of the HUD. Frequently used function operations can be formed after storage, and priorities are allocated according to how often each gesture instruction occurs, making the whole gesture-control process quicker and more accurate.
If not: in step 106, the image data is not stored. If repeated operations all fail to meet the predetermined gesture-image standard, the predetermined gesture image needs to be corrected, for example by correcting the recognition points of the gesture image.
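As an illustration of steps 104-106, the following minimal Python sketch keeps a frame only when it meets the gesture-image standard, stores it, and tracks how often each instruction occurs so that priorities can be allocated as described above. The matcher and store objects and their methods are hypothetical, not part of the patent.

```python
from collections import Counter

gesture_usage = Counter()  # occurrence frequency per instruction, for priority allocation (step 105)

def process_frame(frame, matcher, store):
    """Steps 104-106: store a frame and emit its instruction only if it meets the standard."""
    instruction = matcher.match(frame)   # hypothetical: returns an instruction name or None
    if instruction is None:              # step 106: standard not met, do not store
        return None
    store.append(frame)                  # step 105: store the image data
    gesture_usage[instruction] += 1      # track frequency so common gestures rank higher
    return instruction                   # output the gesture instruction
```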
Fig. 2 shows a flow chart of obtaining the illumination value in the driving environment, which the vision-based HUD control method in this embodiment also includes.
The flow starts at step S201; step S201 can be executed in synchronization with step 101 above, or later than step 101.
In step S202, the illumination value in the driving environment is obtained. Those skilled in the art will understand that a photosensitive element can be used to obtain the illumination value, and an illumination threshold can be set according to the rule that in bright daylight the ambient illumination in the car will be above the threshold, while at night the illumination is dark and the ambient illumination in the car will be below the threshold.
In step S203, it is judged, according to the illumination value, whether to turn on compensating illumination. The purpose of compensating illumination is to provide supplementary light when the in-car illumination is poor at night, so that gesture recognition or face recognition can still be carried out.
If so, the flow enters step S204: the compensating illumination is turned on and the image data in the driving environment is acquired again. That is, if it is judged that compensating illumination needs to be turned on, the acquisition of gesture images and other images is carried out after the compensating illumination has been turned on.
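A minimal sketch of the S201-S204 logic follows, assuming a simple lux threshold and device handles; the threshold value and the light_sensor/ir_led/camera interfaces are illustrative assumptions, since the patent leaves them open.

```python
LUX_THRESHOLD = 50.0  # assumed day/night boundary in lux; the patent does not fix a value

def acquire_with_compensation(light_sensor, ir_led, camera):
    """S202-S204: read ambient illumination; below the threshold, enable the near-IR LED first."""
    lux = light_sensor.read()        # S202: illumination value in the driving environment
    if lux < LUX_THRESHOLD:          # S203: is compensating illumination needed?
        ir_led.on()                  # S204: near-infrared light, invisible to the driver
    else:
        ir_led.off()                 # bright enough: keep the LED off
    return camera.capture()          # (re)acquire image data in the driving environment
```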
Fig. 3 shows a schematic flow chart of the gesture instruction in Fig. 1.
Step S301: start.
In step S302, the static state of the gesture is recognized based on a two-dimensional hand shape. Two-dimensional hand-shape recognition works in the two-dimensional plane and needs only two-dimensional information, without depth, as input. Just as an ordinary photograph contains only two-dimensional information, a two-dimensional image captured by a single camera serves as the input; computer vision techniques then analyze the input image to extract information, thereby realizing gesture recognition. Those skilled in the art will understand that in computer vision (CV), the processing of visual information relies on image processing methods, which include image enhancement, data encoding and transmission, smoothing, edge sharpening, segmentation, feature extraction, and image recognition and understanding. After these processing steps, the quality of the output image is considerably improved: the visual effect is better, and the image is easier for a computer to analyze, process, and recognize. For example, after a two-dimensional hand-shape image is acquired, image enhancement can be applied: a histogram transform can raise the image contrast, realizing single-channel image enhancement by stretching the grayscale range to 0-255. Data encoding and transmission then compress the image. The main purpose of image smoothing is denoising, removing image distortion possibly caused by environmental factors (the vehicle jolting, turning, darkness, and so on). Edge sharpening can then be completed, for example with the edge-detection-based image sharpening method (Zeng Jialiang). After these operations, feature extraction can be carried out, for example a linear transform based on the Karhunen-Loeve expansion, or nonlinear mapping methods such as multidimensional scaling and parametric mapping. Finally, the recognition of the image is completed.
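The enhancement steps just described (grayscale stretching to 0-255, smoothing against jolt-induced noise, edge sharpening) can be sketched with OpenCV as below; the kernel sizes and the sharpening kernel are illustrative choices, not values specified by the patent.

```python
import cv2
import numpy as np

def preprocess_hand_image(bgr):
    """Enhance a hand-shape image before matching, per the pipeline described above."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Histogram stretch: map the observed intensity range onto the full 0-255 scale
    stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    # Smoothing: suppress noise from vehicle jolts, turns, and low light
    smoothed = cv2.GaussianBlur(stretched, (5, 5), 0)
    # Edge sharpening with a basic Laplacian-style kernel
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(smoothed, -1, kernel)
```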
In step S303, after the two-dimensional information of the two-dimensional hand shape is obtained, it is input to obtain a first image. The two-dimensional information can be obtained once the above two-dimensional hand shape has been acquired; the first image obtained is the image after image processing.
In step S304, match recognition is carried out against the preset gestures. The preset gestures include all single-handed and two-handed gesture images, i.e. left hand, right hand, or one hand alone.
In step S305, if a second image matching the first image is found, an operation instruction is triggered. The second image is a prestored gesture image; if match recognition succeeds, the corresponding instruction can be triggered. For example, the static gesture "1" may correspond to an instruction to play music: the HUD is controlled according to the matching result and plays music. The static gesture "fist" may correspond to opening the navigation function: the HUD is controlled according to the matching result and, after completing wake-up, starts navigation.
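A minimal dispatch sketch for S304-S305 follows, matching the processed first image against prestored second images and triggering the bound command; the template filenames, command bindings, and score threshold are illustrative assumptions.

```python
import cv2

PRESET_GESTURES = {          # prestored "second images": name -> grayscale template
    "one":  cv2.imread("gesture_one.png", cv2.IMREAD_GRAYSCALE),
    "fist": cv2.imread("gesture_fist.png", cv2.IMREAD_GRAYSCALE),
}
COMMANDS = {"one": "play_music", "fist": "open_navigation"}  # example bindings from the text
MATCH_THRESHOLD = 0.8        # assumed minimum normalized correlation score

def match_gesture(first_image):
    """S304-S305: return the command of the best-matching template, or None if none match."""
    best_name, best_score = None, MATCH_THRESHOLD
    for name, template in PRESET_GESTURES.items():
        score = cv2.matchTemplate(first_image, template, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_name, best_score = name, score
    return COMMANDS.get(best_name)
```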
Alternatively, after step S302, the flow enters step S306: the static state of a two-dimensional gesture is recognized and tracked based on the two-dimensional gesture. The coverage of two-dimensional gestures is broader than that of two-dimensional hand shapes.
Fig. 4 shows specific examples of the static state of a gesture. In step S401, the static states of the gesture include: a fist, five fingers spread, and a scissors hand. The gesture recognition technology of the company Flutter can be adopted. With this technology, the user can control the HUD with a few hand shapes: for example, when the driver raises a palm in front of the HUD's camera, the HUD starts playing music; putting the palm in front of the camera again stops the playback. Using the pattern-matching approach above, analyzing the image with computer vision algorithms and comparing it with preset image patterns, static-state gesture recognition can be realized. When the driver is in the driver's seat, a simple mid-air gesture is enough for the HUD to start playing music, which is much more convenient than pressing with a finger.
Step S401 also includes gesture actions that combine a hand shape with hand motion. Compared with two-dimensional hand-shape recognition, such combined actions belong to two-dimensional gesture recognition and can include a hand shape plus hand movement: waving, rotating a fist, shaking a "1" gesture. In particular, PointGrab, EyeSight, and ExtremeReality of Israel can be adopted. Two-dimensional gesture recognition has a dynamic character: the motion of the gesture can be tracked, and compound actions combining hand shape and movement can be recognized, so the HUD can be controlled not only to play/pause by gesture but also to perform complex operations requiring two-dimensional coordinate-change information, such as forward/backward/page-up/scroll-down. The static state of a gesture alone or the motion state alone can be used, or gesture recognition can be carried out in a combined "motion state + static state" mode.
Table 1
Static state of gesture | Telephone instruction | Music instruction | WeChat instruction |
---|---|---|---|
Five-finger palm (open) | × | ○ | × |
Single index finger | ○ | × | × |
Fist | × | × | ○ |
Scissors hand | ○ | × | × |
Five-finger palm (closed) | ○ | × | × |
"OK" gesture | ○ | ○ | ○ |

Here, "○" is positive (the gesture activates the operation instruction) and "×" is negative (the gesture does not activate the operation instruction).
Table 2
Motion state of gesture | Telephone instruction | Music instruction | WeChat instruction |
---|---|---|---|
Five-finger palm (open) sliding up and down | × | ○ | × |
Single index finger sliding left/right or up/down | ○ | ○ | ○ |
Fist sliding left and right | × | × | ○ |
Scissors hand sliding left and right | ○ | × | × |
Five-finger palm (closed) sliding up and down | ○ | × | × |
"OK" gesture sliding left and right | ○ | ○ | ○ |
Fig. 5 shows another schematic flow chart of the gesture instruction in Fig. 1.
Step S301: start.
In step S302, the static state of the gesture is recognized based on the two-dimensional hand shape.
In step S304, match recognition is carried out against the preset gestures; if it fails, the flow enters step S307.
In step S307, the illumination value in the driving environment is reacquired, compensating illumination is turned on according to the illumination value, and two-dimensional hand-shape matching is carried out again. When the in-car illumination is insufficient, the HUD can automatically turn on the compensating illumination and prompt the driver to repeat the hand-shape operation; or, before the driver has made the instructing hand shape, the compensating illumination can be turned on in advance to avoid a second operation. The above light-adjustment approach can adopt the intelligent dimming algorithm based on physical illumination variation (Chen Zhongyin, dimming algorithm).
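A minimal retry sketch of the S304 to S307 path, reusing the hypothetical helpers from the earlier sketches (preprocess_hand_image, match_gesture, acquire_with_compensation); the single-retry policy is an assumption, since the patent does not fix the number of attempts.

```python
def recognize_with_retry(light_sensor, ir_led, camera):
    """Try to match once; on failure, re-read illumination, compensate, and match again (S307)."""
    frame = camera.capture()
    command = match_gesture(preprocess_hand_image(frame))
    if command is None:  # S304 failed: reacquire under compensating illumination
        frame = acquire_with_compensation(light_sensor, ir_led, camera)
        command = match_gesture(preprocess_hand_image(frame))
    return command
```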
Fig. 6 shows a schematic diagram of an embodiment of the gesture instruction in Fig. 1.
The static state of a gesture (S401) and the motion state of a gesture (S601) both belong to the HUD-based operation instructions: one or more of a wake-up instruction, a navigation instruction, a telephone instruction, a music instruction, and a WeChat instruction. The navigation instruction includes specifying a destination by voice command, judging whether to avoid congested routes, a navigation-source instruction, a driving road-condition instruction, and a speed-camera warning instruction. Those skilled in the art will understand that specifying a destination by voice includes, but is not limited to, direct navigation to the destination or POI-based address lookup. The navigation-source instruction includes, but is not limited to, selecting the map provider to access, for example Gaode (AutoNavi) or Baidu Maps. The driving road-condition instruction includes, but is not limited to, whether the road is congested, whether a traffic signal has a red light or a camera, whether curbside parking is possible, whether there are pedestrians ahead, and the like. The speed-camera warning instruction includes, but is not limited to, a full-band speed-radar detector, a GPS camera warning, and a combined GPS/radar detector; its purpose is to remind the owner in advance of the presence of electronic eyes or speed radar, preventing fines and demerit points for speeding or violations and letting the driver enjoy driving with a margin of safety. The wake-up instruction wakes the HUD from dormancy; those skilled in the art will understand that a gesture can be associated with the wake-up instruction in advance, so that making the gesture wakes the HUD. The telephone instruction can issue voice-controlled call answering and dialing; when more than one number is stored for the called party, voice selection is possible. For example, after the mobile phone is connected to the HUD via Bluetooth, an incoming call can be rejected or accepted directly through the static state of a gesture, such as "palm", or the motion state of a gesture, such as "fist sliding to the right". The music instruction can, by voice operation, play music from an SD card or radio-station content; when using radio content, for example, music can be switched or the volume raised directly through a static gesture such as "scissors hand", or a motion gesture such as "fist sliding to the left". The WeChat instruction allows WeChat to be logged in beforehand by scanning a QR code; when a message is received, a voice prompt asks whether to read it aloud; when replying, voice input is converted to text and recognized. A WeChat message can be sent directly through the motion state of a gesture, such as "index finger sliding to the right".
Fig. 7 shows specific examples of the motion state of a gesture.
The motion state of a gesture (S601) includes inputting gesture instructions using structured-light, time-of-flight, or multi-camera imaging technology. Three-dimensional gesture recognition requires input that includes depth information and can recognize all kinds of hand shapes, gestures, and actions. Unlike the two two-dimensional gesture-recognition approaches above, three-dimensional gesture recognition cannot reuse a single ordinary camera, because a single ordinary camera cannot provide depth information. Obtaining depth information requires special hardware; at present there are mainly three hardware implementations worldwide, which, combined with advanced computer-vision software algorithms, realize three-dimensional gesture recognition. In "structured light (Structure Light)", the basic principle is that a laser projector carries a grating engraved with a certain pattern; the laser is refracted as it projects through the grating, so that the final landing point of the laser on the object's surface is displaced. When the object is close to the laser projector, the displacement produced by refraction is small; when the object is farther away, the displacement grows correspondingly. A camera detects and captures the pattern projected onto the object's surface, and from the displacement of the pattern an algorithm can calculate the position and depth of the object, restoring the whole three-dimensional space. In "time of flight (Time of Flight)", the technology adopted by SoftKinetic, which provides three-dimensional cameras with gesture-recognition functions, the principle is to fit a light-emitting element whose photons are reflected when they strike the object's surface; a special CMOS sensor captures these photons, emitted by the element and reflected back from the surface, yielding the photons' flight time. From the flight time the flight distance can be derived, which gives the depth information of the object. In "multi-camera imaging (Multi-camera)", exemplified by Leap Motion's eponymous product and uSens' Fingo, the basic principle is that two or more cameras capture images simultaneously (additional cameras can optionally be fitted to the head-up display); by comparing the differences between the images obtained by these cameras at the same instant, an algorithm calculates depth information, achieving multi-angle three-dimensional imaging.
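For the multi-camera case, depth follows the standard stereo triangulation relation depth = focal length × baseline / disparity; the sketch below computes a depth map from two simultaneous grayscale views with OpenCV's block matcher. The calibration values are illustrative.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0    # assumed focal length in pixels (from calibration)
BASELINE_M = 0.06   # assumed distance between the two cameras, in meters

def depth_map(left_gray, right_gray):
    """Multi-camera principle: compare two simultaneous views and triangulate depth."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point disparities with 4 fractional bits; rescale to pixels
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan           # no match: depth undefined there
    return FOCAL_PX * BASELINE_M / disparity     # per-pixel depth in meters
```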
Fig. 8 is a structural diagram of the vision-based HUD control system according to an embodiment of the present invention. The image acquisition device 801 obtains image data in the driving environment; those skilled in the art will understand that it can include one or two cameras, color and/or infrared, and can be arranged at the immediate front of the HUD 805. The illumination sensing device 802 obtains the illumination value in the driving environment; an illumination sensor or a photosensitive element can be adopted. The illumination source 803 provides compensating illumination; a near-infrared LED can be adopted for this. The processing device 804, connected with the image acquisition device, forms static or dynamic image data from the image data, where "static" or "dynamic" refers to the static state or motion state of a gesture; if the image data does not meet the predetermined gesture-image standard, it is not stored; if it does, it is stored and a gesture instruction is output. Connected with the illumination sensing device, the processing device judges from the illumination value whether to turn on compensating illumination; if so, the compensating illumination is turned on and the image data in the driving environment is acquired again. The processing device 804 can provide high-speed processing capacity, for example a quad-core ARM processing unit, and supports a microSD memory card of up to 32 GB. The HUD 805 serves as the carrier of the above devices.
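A minimal sketch of how the Fig. 8 components might be wired together in software follows; the class layout and method names are illustrative, not taken from the patent, and it reuses the hypothetical recognize_with_retry helper from the earlier sketch.

```python
from dataclasses import dataclass

@dataclass
class HUDControlSystem:
    """Fig. 8: image acquisition (801), light sensing (802), IR source (803), HUD (805)."""
    camera: object        # 801: color and/or infrared camera
    light_sensor: object  # 802: photosensitive element
    ir_led: object        # 803: near-infrared compensating source
    display: object       # 805: the head-up display itself

    def tick(self):
        """One control cycle: acquire (with compensation if dark), match, act."""
        command = recognize_with_retry(self.light_sensor, self.ir_led, self.camera)
        if command is not None:
            self.display.execute(command)  # e.g. play_music, open_navigation
```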
Fig. 9 is a structural diagram of a preferred embodiment of Fig. 8. In this embodiment, the vision-based HUD control system includes an image acquisition device, an illumination sensing device, an illumination source, and a processing device, the processing device being connected with the image acquisition device, the illumination sensing device, and the illumination source respectively. The image acquisition device includes at least one camera; the camera captures a two-dimensional hand shape to form static image data, and after successful matching with a preset gesture outputs a wake-up instruction, a music-switching instruction, a call answer/reject instruction, or a page-switching instruction. The illumination sensing device, connected with the illumination source, turns the source off when the illumination value exceeds the set threshold; when the illumination value is below the set threshold, the illumination source is controlled to turn on and provide illumination for the image acquisition device. The illumination source is a near-infrared LED 806. The system also includes an OBD communication interface 807 and a display interface 808, connected with the processing device, for reading information on the vehicle CAN bus in real time and showing it on the display interface. The display interface 808 is connected with the vehicle's CAN communication bus to show the state of the vehicle in real time.
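Reading vehicle information off the CAN bus for the display interface (807/808) could look like the sketch below, using the python-can library; the channel name, arbitration ID, and byte decoding are illustrative assumptions, as the patent does not specify the CAN frames.

```python
import can  # python-can

SPEED_CAN_ID = 0x1A0  # hypothetical arbitration ID carrying vehicle speed

def read_vehicle_speed(channel="can0"):
    """Poll the CAN bus (via the OBD interface) and decode a speed frame for the HUD."""
    with can.interface.Bus(channel=channel, interface="socketcan") as bus:
        msg = bus.recv(timeout=1.0)                  # one frame, or None on timeout
        if msg is not None and msg.arbitration_id == SPEED_CAN_ID:
            return msg.data[0]                       # assumed: speed in km/h, byte 0
    return None
```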
Those of ordinary skill in the art will understand that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (10)
1. the HUD of a view-based access control model controls system, it is characterised in that including:
Image collecting device, for obtaining the view data in driving environment;
Illumination induction installation, for obtaining the illumination value in driving environment;
Illumination source, is used for providing compensation illumination;
Process device, is connected with image collecting device, for according to above-mentioned view data, forming static or dynamic view data, wherein static state or be dynamically the resting state of the kinestate of gesture or gesture; If above-mentioned view data is unsatisfactory for predetermined images of gestures standard, then above-mentioned view data is not stored; If meeting, then export gesture instruction after above-mentioned view data being stored;
It is connected with illumination induction installation, for according to above-mentioned illumination value, it may be judged whether open and compensate illumination; If so, then open and compensate light irradiation and again obtain the view data in driving environment.
2. the HUD of view-based access control model according to claim 1 controls system, it is characterized in that, above-mentioned image collecting device includes at least one photographic head, this camera collection forms static image data to two dimension hand-type, carries out exporting after the match is successful with default gesture and wakes instruction, music switching command up, answers or refuse send a telegram here instruction or switching page instruction.
3. the HUD of view-based access control model according to claim 1 controls system, it is characterised in that above-mentioned illumination induction installation, is connected with above-mentioned illumination source, closes more than when setting threshold value at illumination value,
Described illumination source is at illumination value less than, when setting threshold value, controlling described illumination source and be opened for the illumination that image collecting device provides, and above-mentioned illumination source is near-infrared LED.
4. the HUD of view-based access control model according to claim 1 controls system, it is characterised in that also includes OBD communication interface and display interface, is connected with above-mentioned process device, for the information read on vehicle CAN and be shown in described display interface in real time.
5. the HUD control method of a view-based access control model, it is characterised in that comprise the following steps:
Obtain the view data in driving environment;
According to above-mentioned view data, form static or dynamic view data, wherein static or be dynamically the resting state of the kinestate of gesture or gesture;
If above-mentioned view data is unsatisfactory for predetermined images of gestures standard, then above-mentioned view data is not stored; If meeting, then export gesture instruction after above-mentioned view data being stored;
Obtain the illumination value in driving environment;
According to above-mentioned illumination value, it may be judged whether open and compensate illumination; If so, then open and compensate light irradiation and again obtain the view data in driving environment.
6. the HUD control method of view-based access control model according to claim 5, it is characterised in that the resting state of described gesture is identified based on two dimension hand-type,
After obtaining the two-dimensional signal on described two dimension hand-type, input obtains the first image;
Match cognization is carried out, if finding and the second image of the first described images match, trigger action instruction with default gesture;
Or the resting state of two dimension gesture is identified following the trail of based on two dimension gesture.
7. the HUD control method of the view-based access control model according to claim 5 or 6, it is characterised in that the resting state of described gesture includes: clench fist, the five fingers open, shears hands, forefinger open a business in one or more,
Or, the gesture motion that gesture and hand exercise are combined.
8. the HUD control method of view-based access control model according to claim 6, it is characterised in that if carry out match cognization with default gesture, do not find, then reacquire the illumination value in driving environment, open according to illumination value and compensate illumination, again carry out two dimension hand-type coupling.
9. the HUD control method of view-based access control model according to claim 6, it is characterised in that described operational order includes: wake instruction, navigation instruction up, telephone order, musical instruction, one or more in wechat instruction,
Navigation instruction, including selecting destination to specify with phonetic order, judging whether to hide block up route instructions, navigate sourse instruction, driving road-condition instruction, cyberdog instruction;
Wake instruction up, in order to wake the HUD of dormancy up;
Telephone order, can send telephone order and carry out Voice command and answer, dial number, when called party's number of depositing is more than one time, it is possible to voice selecting;
Musical instruction, according to voice operating, it is possible to play the music of SD card or broadcast station content;
Wechat instruction, can log in wechat by barcode scanning in advance, has whether voice message is reported when receiving wechat, converts word to by phonetic entry, identification during reply.
10. the HUD control method of view-based access control model according to claim 5, it is characterized in that, the kinestate of described gesture judges based on the view data obtained in driving environment and illumination value, adopts structured light technique, light to fly time technology or polygonal imaging technique input gesture instruction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610013309.3A CN105677206A (en) | 2016-01-08 | 2016-01-08 | System and method for controlling head-up display based on vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610013309.3A CN105677206A (en) | 2016-01-08 | 2016-01-08 | System and method for controlling head-up display based on vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105677206A true CN105677206A (en) | 2016-06-15 |
Family
ID=56299773
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610013309.3A Pending CN105677206A (en) | 2016-01-08 | 2016-01-08 | System and method for controlling head-up display based on vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105677206A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107678425A (en) * | 2017-08-29 | 2018-02-09 | 南京理工大学 | A kind of car controller based on Kinect gesture identifications |
CN108229345A (en) * | 2017-12-15 | 2018-06-29 | 吉利汽车研究院(宁波)有限公司 | A kind of driver's detecting system |
CN108657070A (en) * | 2018-03-30 | 2018-10-16 | 斑马网络技术有限公司 | Automobile image interactive system and its method |
CN109087636A (en) * | 2017-12-15 | 2018-12-25 | 蔚来汽车有限公司 | Interactive device |
CN112396998A (en) * | 2019-08-14 | 2021-02-23 | 海信视像科技股份有限公司 | Vehicle-mounted display control system, method and device |
CN112558305A (en) * | 2020-12-22 | 2021-03-26 | 华人运通(上海)云计算科技有限公司 | Control method, device and medium for display picture and head-up display control system |
CN112684887A (en) * | 2020-12-28 | 2021-04-20 | 展讯通信(上海)有限公司 | Application device and air gesture recognition method thereof |
CN113978389A (en) * | 2021-10-29 | 2022-01-28 | 北京乐驾科技有限公司 | Detection circuit, system and method for automobile ignition, electronic equipment and storage medium |
US11402900B2 (en) * | 2019-01-07 | 2022-08-02 | Beijing Boe Optoelectronics Technology Co., Ltd. | Augmented reality system comprising an aircraft and control method therefor |
CN118494183A (en) * | 2024-07-12 | 2024-08-16 | 比亚迪股份有限公司 | Vehicle control method, vehicle control device and vehicle |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102707801A (en) * | 2012-05-07 | 2012-10-03 | 广东好帮手电子科技股份有限公司 | Vehicle-mounted recognition control system and control method thereof |
CN102910130A (en) * | 2012-10-24 | 2013-02-06 | 浙江工业大学 | Actually-enhanced driver-assisted early warning system |
CN104238731A (en) * | 2013-06-24 | 2014-12-24 | 惠州市华阳多媒体电子有限公司 | Gesture control system of head-up display and control method of gesture control system |
CN104866106A (en) * | 2015-06-03 | 2015-08-26 | 深圳市光晕网络科技有限公司 | HUD and infrared identification-combined man-machine interactive method and system |
CN104875680A (en) * | 2015-06-03 | 2015-09-02 | 深圳市光晕网络科技有限公司 | HUD (head up display) device combining voice and video recognition |
CN104898848A (en) * | 2015-06-17 | 2015-09-09 | 苏州艾曼纽自动化科技有限公司 | HUD (head up display) system based on DLP (digital light processing) technology |
CN105224088A (en) * | 2015-10-22 | 2016-01-06 | 东华大学 | A kind of manipulation of the body sense based on gesture identification vehicle-mounted flat system and method |
- 2016-01-08: CN application CN201610013309.3A filed; published as CN105677206A (status: Pending)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102707801A (en) * | 2012-05-07 | 2012-10-03 | 广东好帮手电子科技股份有限公司 | Vehicle-mounted recognition control system and control method thereof |
CN102910130A (en) * | 2012-10-24 | 2013-02-06 | 浙江工业大学 | Actually-enhanced driver-assisted early warning system |
CN104238731A (en) * | 2013-06-24 | 2014-12-24 | 惠州市华阳多媒体电子有限公司 | Gesture control system of head-up display and control method of gesture control system |
CN104866106A (en) * | 2015-06-03 | 2015-08-26 | 深圳市光晕网络科技有限公司 | HUD and infrared identification-combined man-machine interactive method and system |
CN104875680A (en) * | 2015-06-03 | 2015-09-02 | 深圳市光晕网络科技有限公司 | HUD (head up display) device combining voice and video recognition |
CN104898848A (en) * | 2015-06-17 | 2015-09-09 | 苏州艾曼纽自动化科技有限公司 | HUD (head up display) system based on DLP (digital light processing) technology |
CN105224088A (en) * | 2015-10-22 | 2016-01-06 | 东华大学 | A kind of manipulation of the body sense based on gesture identification vehicle-mounted flat system and method |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107678425A (en) * | 2017-08-29 | 2018-02-09 | 南京理工大学 | A kind of car controller based on Kinect gesture identifications |
CN108229345A (en) * | 2017-12-15 | 2018-06-29 | 吉利汽车研究院(宁波)有限公司 | A kind of driver's detecting system |
CN109087636A (en) * | 2017-12-15 | 2018-12-25 | 蔚来汽车有限公司 | Interactive device |
CN108657070A (en) * | 2018-03-30 | 2018-10-16 | 斑马网络技术有限公司 | Automobile image interactive system and its method |
US11402900B2 (en) * | 2019-01-07 | 2022-08-02 | Beijing Boe Optoelectronics Technology Co., Ltd. | Augmented reality system comprising an aircraft and control method therefor |
CN112396998A (en) * | 2019-08-14 | 2021-02-23 | 海信视像科技股份有限公司 | Vehicle-mounted display control system, method and device |
CN112396998B (en) * | 2019-08-14 | 2022-09-16 | 海信视像科技股份有限公司 | Vehicle-mounted display control system, method and device |
CN112558305A (en) * | 2020-12-22 | 2021-03-26 | 华人运通(上海)云计算科技有限公司 | Control method, device and medium for display picture and head-up display control system |
CN112684887A (en) * | 2020-12-28 | 2021-04-20 | 展讯通信(上海)有限公司 | Application device and air gesture recognition method thereof |
CN113978389A (en) * | 2021-10-29 | 2022-01-28 | 北京乐驾科技有限公司 | Detection circuit, system and method for automobile ignition, electronic equipment and storage medium |
CN118494183A (en) * | 2024-07-12 | 2024-08-16 | 比亚迪股份有限公司 | Vehicle control method, vehicle control device and vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105677206A (en) | System and method for controlling head-up display based on vision | |
US11366516B2 (en) | Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device | |
CN205720871U (en) | A kind of intelligence head-up-display system | |
JP2022537923A (en) | VEHICLE DOOR UNLOCK METHOD AND APPARATUS, SYSTEM, VEHICLE, ELECTRONIC DEVICE, AND STORAGE MEDIUM | |
WO2020173155A1 (en) | Vehicle door unlocking method and apparatus, system, vehicle, electronic device and storage medium | |
CN105654753A (en) | Intelligent vehicle-mounted safe driving assistance method and system | |
CN103106401B (en) | Mobile terminal iris recognition device with human-computer interaction mechanism | |
US8831295B2 (en) | Electronic device configured to apply facial recognition based upon reflected infrared illumination and related methods | |
CN105527710A (en) | Intelligent head-up display system | |
CN104866106A (en) | HUD and infrared identification-combined man-machine interactive method and system | |
CN108710833B (en) | User identity authentication method and mobile terminal | |
CN202815718U (en) | Individual carried-with device | |
CN111951157B (en) | Image processing method, device and storage medium | |
WO2021147698A1 (en) | Navigation method for foldable screen, and related apparatuses | |
US20210168279A1 (en) | Document image correction method and apparatus | |
CN106707512B (en) | Low-power consumption intelligent AR system and intelligent AR glasses | |
US20210044762A1 (en) | Apparatus and method for displaying graphic elements according to object | |
KR20180087532A (en) | An acquisition system of distance information in direction signs for vehicle location information and method | |
CN110825216A (en) | Method and system for man-machine interaction of driver during driving | |
CN205721729U (en) | A kind of HUD control system of view-based access control model | |
CN114783067B (en) | Gesture-based recognition method, device and system | |
CN108763894B (en) | User identity authentication method and mobile terminal | |
US20220375172A1 (en) | Contextual visual and voice search from electronic eyewear device | |
WO2022142079A1 (en) | Graphic code display method and apparatus, terminal, and storage medium | |
CN111212228B (en) | Image processing method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 102208 Beijing city Changping District Huilongguan Longyu street 1 hospital floor A loe Center No. 1 floor 5 Room 518 Applicant after: BEIJING LEJIA TECHNOLOGY CO., LTD. Address before: 100193 Beijing City, northeast of Haidian District, South Road, No. 29, building 3, room 3, room 3558 Applicant before: BEIJING LEJIA TECHNOLOGY CO., LTD. |
CB02 | Change of applicant information | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160615 |
RJ01 | Rejection of invention patent application after publication |