CN103809772A - Electronic system and relevant method thereof - Google Patents

Electronic system and relevant method thereof

Info

Publication number
CN103809772A
CN103809772A (application CN201210451157.7A)
Authority
CN
China
Prior art keywords
input end
face
mark
image
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210451157.7A
Other languages
Chinese (zh)
Inventor
朱燕宁
阿列克谢·谢尔盖耶维奇·法捷耶夫
张波勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
INGEONIX CORP
Original Assignee
INGEONIX CORP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by INGEONIX CORP filed Critical INGEONIX CORP
Priority to CN201210451157.7A priority Critical patent/CN103809772A/en
Publication of CN103809772A publication Critical patent/CN103809772A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses embodiments of an electronic system, a device, and related operating methods. In one embodiment, a computing system comprises an input module, a sensing module, a computing module, and an analysis module. The input module is configured to obtain images of an input end face from a camera, the input end face being the emitting face of a remote controller and carrying a mark arranged around it. The sensing module is configured to identify, in each obtained image, the block corresponding to the mark. The computing module is configured to form a temporal trajectory of the input end face based on the identified blocks, and the analysis module is configured to correlate the formed trajectory with a computing command.

Description

Electronic system and relevant method thereof
Technical field
This technology relates to electronic systems and related methods, and in particular to electronic systems that operate together with a remote controller having a non-touch input end face.
Background
Input devices provide data and/or control signals to computers, televisions, game consoles, and other types of electronic equipment. Input devices have developed considerably since the days of early computing. For example, early computers read data from punched tape or film using punched-card readers, so producing even a single input was cumbersome. More recently, "modern" input devices have been developed that improve input efficiency, including mice, touchpads, joysticks, motion-sensing game controllers, and other types of devices.
Even with this considerable development, conventional input devices still do not provide a natural mechanism for operating electronic equipment. For example, the mouse is widely used as a pointing device for operating a computer, yet the user must mentally translate the planar motion of the mouse into the planar motion of a cursor on the computer display. The touchpad on a laptop computer is even harder to operate than a mouse because of variations in touch sensitivity and/or its limited operating surface. In addition, operating conventional input devices typically requires rigid gestures, which can make the user uncomfortable or even cause illness.
Summary of the invention
According to one aspect of the present invention, there is provided a computer-implemented method, comprising: obtaining images of an input end face with a camera, the input end face being the emitting face of a remote controller and having a mark arranged around the input end face; identifying a block in each obtained image, the identified block corresponding to the mark; forming a temporal trajectory of the input end face based on the block identified in each obtained image; correlating the formed temporal trajectory with a computing command; and executing the computing command with a processor.
According to another aspect of the present invention, there is provided an electronic system, comprising: means for obtaining images of an input end face, the input end face being the emitting face of a remote controller and having a mark arranged around the input end face; means for identifying a block in each obtained image, the identified block corresponding to the mark; means for forming a temporal trajectory of the input end face based on the block identified in each obtained image; means for correlating the formed temporal trajectory with a computing command; and means for executing the computing command.
According to yet another aspect of the present invention, there is provided a computing system, comprising: an input module configured to obtain images of an input end face from a camera, the input end face being the emitting face of a remote controller and having a mark arranged around the input end face; a sensing module configured to identify a block in each obtained image, the identified block corresponding to the mark; a computing module configured to form a temporal trajectory of the input end face based on the block identified in each obtained image; and an analysis module configured to correlate the formed temporal trajectory with a computing command.
Brief description of the drawings
Fig. 1 is a schematic diagram of an electronic system in accordance with an embodiment of this technology.
Fig. 2 shows top, right-side, front, and sectional views of an embodiment of a remote controller suitable for use in the system of Fig. 1 in accordance with an embodiment of this technology.
Fig. 3 is a circuit diagram of the mark of Fig. 2 in accordance with an embodiment of this technology.
Fig. 4 is a block diagram of computing system software modules suitable for the system of Fig. 1 in accordance with an embodiment of this technology.
Fig. 5 is a block diagram showing software routines suitable for the processing module of Fig. 4 in accordance with an embodiment of this technology.
Fig. 6A is a flowchart showing a data input method in accordance with an embodiment of this technology.
Fig. 6B is a flowchart showing a data processing operation suitable for the method of Fig. 6A in accordance with an embodiment of this technology.
Fig. 7A is a schematic spatial diagram of a remote controller (input end face) and a detector in accordance with an embodiment of this technology.
Fig. 7B is a schematic diagram of a segmented image of the input end face of Fig. 7A in accordance with an embodiment of this technology.
Figs. 8A-8C schematically show relative orientations between the remote controller (input end face) and the detector in accordance with embodiments of this technology.
Figs. 8D-8F schematically show segmented images of the input end face in Figs. 8A-8C, respectively.
Fig. 8G schematically shows the input plane relative to the detector plane in accordance with an embodiment of this technology.
Figs. 9A-9D schematically show an example of identifying a user's action on the remote controller in accordance with an embodiment of this technology.
Detailed description
Various embodiments of electronic systems, devices, and related operating methods are described below. The term "mark" is used herein to refer to a component carried by and/or otherwise associated with at least a portion of an object, and used to indicate, identify, and/or otherwise distinguish the object. The term "detector" is used herein to refer to a component for monitoring, identifying, and/or otherwise confirming a mark. For purposes of illustration, examples of marks and detectors with particular configurations, components, and/or functions are described below. Marks and/or detectors in accordance with other embodiments of this technology can also have other suitable configurations, components, and/or functions. A person skilled in the art will also understand that this technology can have additional embodiments, and that it can be practiced without several of the details of the embodiments described below with reference to Figs. 1-9D.
Fig. 1 is a schematic diagram of an electronic system 100 in accordance with an embodiment of this technology. As shown in Fig. 1, the electronic system 100 includes a remote controller 101, a detector 104, an output device 106, and a controller 118 operatively coupled to the foregoing components. Optionally, the electronic system 100 can also include a light source 112 (for example, a fluorescent bulb) configured to provide illumination 114 to the input end face 102 and/or other components of the electronic system 100. In other embodiments, the light source 112 can be omitted. In further embodiments, the electronic system 100 can also include a TV tuner, a touch-screen controller, telephone circuitry, and/or other suitable components.
The remote controller 101 includes an input end face 102 (for example, the emitting face of the remote controller 101) and can be configured not to touch the output device 106. The input end face 102 can carry a mark 103 configured to emit a signal 110 toward the detector 104. The mark 103 is arranged around the emitting face 102 of the remote controller 101. In certain embodiments, the mark 103 can be an active component. For example, the mark 103 can include light-emitting diodes ("LEDs"), organic light-emitting diodes ("OLEDs"), laser diodes ("LDs"), polymer light-emitting diodes ("PLEDs"), fluorescent lamps, infrared ("IR") emitters, and/or other suitable light emitters configured to emit visible, infrared ("IR"), ultraviolet, and/or other suitable spectra. In other embodiments, the mark 103 can include a radio transmitter configured to emit radio frequency ("RF"), microwave, and/or other suitable types of electromagnetic signals. In further examples, the mark 103 can include an ultrasonic transducer configured to emit acoustic signals. In any of the above embodiments, the mark 103 can be coupled to a power supply (as shown in Fig. 3). Specific structures of the active mark 103 are described in more detail below with reference to Figs. 2 and 3.
In other embodiments, the mark 103 can include unpowered (i.e., passive) components. For example, the mark 103 can include a reflective material that emits the signal 110 by reflecting at least a portion of the illumination 114 from the optional light source 112. The reflective material can include aluminum foil, mirrors, and/or other suitable materials with sufficient reflectivity. In further embodiments, the mark 103 can include a combination of active and passive components. In any of the above embodiments, the mark 103 can be configured to emit a signal 110 with a pattern that conforms to the input end face 102.
The detector 104 is configured to monitor and capture the signal 110 emitted from the mark 103 of the input end face 102. In the following description, for purposes of illustration, a camera that captures images and/or video of the input end face 102 (for example, the Webcam C500 provided by Logitech of Fremont, California) is used as an example of the detector 104. In other embodiments, the detector 104 can also include an IR camera, a laser detector, a radio receiver, an ultrasonic transducer, and/or other suitable types of radio, image, and/or sound capturing components. Even though only one detector 104 is shown in Fig. 1, in other embodiments the electronic system 100 can include two, three, four, or any other suitable number of detectors 104 (not shown).
The output device 106 can be configured to provide textual, graphic, audio, and/or other suitable types of feedback to the user. For example, as shown in Fig. 1, the output device 106 can display a computer cursor 108 to the user. In the illustrated embodiment, the output device 106 includes a liquid crystal display ("LCD"). In other embodiments, the output device 106 can also include a touch screen, an OLED display, and/or other suitable displays.
The controller 118 can include a processor 120 coupled to a memory 122 and an input/output interface 124. The processor 120 can include a microprocessor, a field-programmable gate array, and/or other suitable logic processing components. The memory 122 can include volatile and/or non-volatile computer-readable media configured to store data received from the processor 120 and instructions for the processor 120 (for example, ROM, RAM, magnetic disk storage media, optical storage media, flash memory devices, EEPROM, and/or other suitable non-volatile storage media). In one embodiment, both the data and the instructions can be stored in one computer-readable medium. In other embodiments, the data can be stored in one medium (for example, RAM) and the instructions can be stored in a different medium (for example, EEPROM). The input/output interface 124 can include drivers for interfacing with a camera, a display, a touch screen, a keyboard, a trackball, a gauge or dial, and/or other suitable types of input/output devices.
In certain embodiments, the controller 118 can be operatively coupled to the other components of the electronic system 100 via a hardwired communication link (for example, a USB link, an Ethernet link, an RS232 link, etc.). In other embodiments, the controller 118 can be operatively coupled to the other components of the electronic system 100 via a wireless connection (for example, a Wi-Fi link, a Bluetooth link, etc.). In further embodiments, the controller 118 can be configured as an application-specific integrated circuit, a system-on-chip circuit, a programmable logic controller, and/or another suitable computing architecture.
In certain embodiments, the detector 104, the output device 106, and the controller 118 can be configured as a desktop computer, a laptop computer, a tablet computer, a smartphone, and/or another suitable type of computing device. In other embodiments, the output device 106 can be at least a portion of a television, and the detector 104 and/or the controller 118 can be integrated into the television or separate from it. In further embodiments, the controller 118 and the detector 104 can be configured as a game console, and the output device 106 can include a TV screen and/or other suitable display. In additional embodiments, the remote controller 101 and a computer-readable storage medium storing associated operating instructions for the processor 120 can be configured as a kit. In yet other embodiments, the remote controller 101, the detector 104, the output device 106, and/or the controller 118 can have other suitable configurations.
By swinging, rotating, and/or otherwise moving the remote controller 101, the user can operate the controller 118 in a non-touch manner. The electronic system 100 can monitor the motion of the remote controller 101 and correlate that motion with a computing command. The electronic system 100 can then execute the computing command, for example by moving the computer cursor 108 from a first position 109a to a second position 109b. A person skilled in the art will understand that the following discussion is for illustration only; the electronic system 100 can be configured to perform other operations in addition to, or in place of, the operations discussed below.
In operation, the detector 104 can begin monitoring the mark 103 of the remote controller 101 (input end face 102) for commands based on particular preset conditions. For example, in one embodiment, the detector 104 can begin monitoring the signal 110 when the signal 110 emitted from the mark 103 is detected. In another example, the detector 104 can begin monitoring the signal 110 when the detector 104 determines that the signal 110 has been relatively stable for a preset period (for example, 0.1 second). In further examples, the detector 104 can begin monitoring the signal 110 based on other suitable conditions.
After the detector 104 begins monitoring the remote controller 101 (input end face 102) for commands, the processor 120 samples the images of the input end face 102 captured by the detector 104 via the input/output interface 124. The processor 120 then performs image segmentation on the captured images by identifying the pixels and/or image blocks corresponding to the emitted signal 110. The identification can be based on pixel intensity, primitive shape, and/or other suitable parameters.
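For illustration only, a minimal Python sketch of this intensity-based segmentation is shown below; the function names, the threshold value, and the use of NumPy/SciPy are assumptions made for the sketch, not part of the patent disclosure.

```python
import numpy as np
from scipy import ndimage

def segment_mark(frame: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Return a binary mask of pixels bright enough to belong to the mark.

    frame     -- grayscale image (H x W) captured from the detector/camera
    threshold -- assumed intensity cutoff; in practice it would be tuned to
                 the mark's emitter brightness and ambient illumination
    """
    return frame >= threshold

def find_blocks(mask: np.ndarray, min_pixels: int = 20):
    """Group adjacent mask pixels into numbered blocks and drop noise specks."""
    labels, count = ndimage.label(mask)
    blocks = [np.argwhere(labels == i) for i in range(1, count + 1)]
    return [b for b in blocks if len(b) >= min_pixels]
```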
The processor 120 then identifies particular characteristics of the segmented image of the input end face 102. For example, in one embodiment the processor 120 can identify the mark 103 observed in the segmented image. The processor 120 can also fit a shape (for example, a rounded rectangle) to the segmented image to recognize the configuration of the mark 103. In other examples, the processor 120 can perform other suitable analyses on the segmented image.
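One way to realize the shape-fitting step is sketched below using OpenCV, approximating the rounded-rectangle ring with a fitted ellipse; the choice of OpenCV and of an ellipse as the fitted primitive are assumptions for illustration, not the patent's prescribed algorithm.

```python
import cv2
import numpy as np

def fit_mark_shape(mask: np.ndarray):
    """Fit an ellipse to the largest segmented region as a stand-in for the
    rounded-rectangle ring formed by the mark's projection."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:  # cv2.fitEllipse needs at least 5 contour points
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    # The minor/major axis ratio shrinks as the input plane tilts away from
    # the detector plane, so the fit hints at the orientation.
    return {"center": (cx, cy), "axes": (major, minor), "angle": angle}
```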
The processor 120 then retrieves predetermined patterns of the input end face 102 from the memory 122. The predetermined patterns can include orientation and/or position parameters of the input end face 102 computed from an analytical model. For example, a predetermined pattern can include the projected shape of the mark 103 and/or other parameters based on known plane angles between the input end face 102 and the detector 104. By comparing the characteristics recognized in the segmented image with the retrieved predetermined patterns, the processor 120 can determine at least one of the orientation of the input end face 102 and its current distance from the detector 104.
The processor 120 then repeats the foregoing operations over a period of time (for example, 0.5 second) and accumulates the determined orientations and/or distances in a buffer or other suitable computer storage. Based on the orientations and/or distances accumulated at multiple points in time, the processor 120 can then construct a temporal trajectory between the input end face 102 and the detector 104. The processor 120 then compares the constructed trajectory with the track action models (Fig. 4) stored in the memory 122 to determine a swing, rotation, movement, and/or other action of the remote controller 101. For example, as shown in Fig. 1, the processor 120 can determine that the constructed trajectory corresponds to a substantially linear swing of the remote controller 101.
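The accumulation of orientations and distances over a sliding window (the 0.5-second period mentioned above) can be sketched as follows; the class name and the window handling are illustrative assumptions.

```python
from collections import deque
import time

class TrajectoryBuffer:
    """Accumulate (timestamp, orientation, distance) samples over a short
    window to form the temporal trajectory described above."""

    def __init__(self, window_s: float = 0.5):
        self.window_s = window_s
        self.samples = deque()

    def add(self, orientation, distance):
        now = time.monotonic()
        self.samples.append((now, orientation, distance))
        # Drop samples older than the accumulation window.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def trajectory(self):
        return list(self.samples)
```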
Once the remote-controller action (the user action) is determined, the processor 120 can map the determined user action to a control operation and/or another suitable type of operation. For example, in the illustrated embodiment, the processor 120 can map a substantially linear swing of the remote controller 101 to a substantially linear movement of the computer cursor 108. Accordingly, the processor 120 outputs a command to the output device 106 to move the computer cursor 108 from the first position 109a to the second position 109b.
Several embodiments of the electronic system 100 can be more intuitive and natural than conventional input devices by recognizing and incorporating commonly accepted gestures. For example, shifting the computer cursor 108 left or right can correspond to shifting the remote controller 101 left or right. Likewise, several embodiments of the electronic system 100 do not require rigid user motions during operation; instead, the user can operate the electronic system 100 in any comfortable posture using the input end face 102 of the remote controller 101. In addition, several embodiments of the electronic system 100 can be more portable than some conventional input devices, because the remote controller 101 does not require a hard surface or any other support.
Fig. 2 shows top, right-side, front, and sectional views of an embodiment of the remote controller 101 suitable for use in the electronic system 100 of Fig. 1 in accordance with an embodiment of this technology. As shown in Fig. 2, the input end face 102 can carry the mark 103 around the circumferential profile of its surface. The mark 103 can be secured to the input end face 102 with clamps, clips, safety pins, clasps, Velcro, adhesives, and/or other suitable fasteners, or the mark 103 can be press-fitted and/or friction-fitted to the input end face 102 without fasteners. The mark 103 thus forms a ring or ring-like shape around the input end face 102.
Fig. 3 is a circuit diagram suitable for the mark 103 discussed above with reference to Fig. 2. As shown in Fig. 3, in the illustrated embodiment the mark 103 is illuminated by LEDs connected in an LED chain, with a battery 133 (which can be shared with other components of the remote controller 101) coupled across the ends of the chain. In other embodiments, the mark 103 can also be illuminated by LEDs coupled in parallel or in other suitable ways. Even though not shown in Fig. 3, the remote controller 101 can also include a switch, a power controller, and/or suitable electrical/mechanical components for powering the mark 103.
Fig. 4 is a block diagram of computing system software modules 130 suitable for the controller 118 of Fig. 1 in accordance with an embodiment of this technology. Each component can be written as a computer program, procedure, or process in source code in a conventional programming language (for example, the C++ programming language), and each component is provided for execution by the processor 120 of the controller 118. The various implementations of the source code and object byte codes can be stored in the memory 122. The software modules 130 of the controller 118 can include an input module 132, a database module 134, a processing module 136, an output module 138, and a display module 140 interconnected with one another.
In operation, the input module 132 can accept a data input 150 (for example, images from the detector 104 in Fig. 1) and communicate the accepted data to the other components for further processing. The database module 134 organizes records, including action models 142 and an action-command map 144, and facilitates storing these records in the memory 122 or retrieving them from the memory 122. Any type of database organization can be used, including a flat file system, a hierarchical database, a relational database, or a distributed database (for example, one provided by a database supplier such as Oracle Corporation, Redwood Shores, California).
Processing module 136 is analyzed the data input 150 from load module 132 and/or other data sources, and the data input 150 of output module 138 based on analyzing produces output signal 152.Processor 120 can comprise display module, for via output device 106 (Fig. 1), monitor, printer and/or other applicable equipment shows, printing or downloading data input 150, output signal 152 and/or other information.Be described in more detail the embodiment of processing module 136 referring to Fig. 5.
Fig. 5 is a block diagram showing an embodiment of the processing module 136 of Fig. 4. As shown in Fig. 5, the processing module 136 can further include a sensing module 160, an analysis module 162, a control module 164, and a computing module 166 interconnected with one another. Each module can be written as a computer program, procedure, or routine in source code in a conventional programming language, or one or more of the modules can be hardware modules.
The sensing module 160 is configured to receive the data input 150 and to identify the mark 103 (Fig. 1) on the input end face 102 (Fig. 1) based on the data input 150 (referred to herein as "image segmentation"). For example, in certain embodiments the data input 150 includes still images (or video frames) of the input end face 102, the remote controller 101 (Fig. 1), and background objects (not shown). The sensing module 160 can then be configured to identify the pixels and/or image blocks in the still images that correspond to the mark 103 of the input end face 102. Based on the identified pixels and/or image blocks, the sensing module 160 forms a segmented image of the input end face 102.
In one embodiment, the sensing module 160 includes a comparison routine that compares the light intensity value of each pixel with a predetermined threshold. If the light intensity is above the predetermined threshold, the sensing module 160 can indicate that the pixel corresponds to the mark 103. In another embodiment, the sensing module 160 can include a shape-determination routine configured to approximate or identify the shape of pixels in the still image. If the approximated or identified shape matches a preset shape of the mark 103, the sensing module 160 can indicate that the pixels correspond to the mark 103.
In yet another embodiment, the sensing module 160 can include a filter routine configured to identify pixels with particular color indices, peak frequencies, average frequencies, and/or other suitable spectral characteristics. If the filtered spectral characteristics correspond to preset values for the mark 103, the sensing module 160 can indicate that the pixels correspond to the mark 103. In further embodiments, the sensing module 160 can include a combination of at least some of the comparison routine, the shape-determination routine, the filter routine, and/or other suitable routines.
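A filter routine of the kind described above might be sketched as follows; the HSV color band is an assumed example for a reddish LED emitter and is not specified by the patent.

```python
import cv2
import numpy as np

def filter_mark_pixels(frame_bgr: np.ndarray) -> np.ndarray:
    """Filter-routine sketch: keep pixels whose color falls inside a band
    preset for the mark's emitters (assumed values, for illustration)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 120, 150])    # hue, saturation, value lower bounds
    upper = np.array([10, 255, 255])   # upper bounds
    return cv2.inRange(hsv, lower, upper) > 0
```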
The computing module 166 can include routines configured to perform various types of calculations to facilitate the operation of the other modules. For example, the computing module 166 can include a sampling routine configured to sample the data input 150 at regular time intervals along preset directions. In certain embodiments, the sampling routine can include linear or nonlinear interpolation, extrapolation, and/or other suitable subroutines configured to produce a set of data, images, or frames from the detector 104 (Fig. 1) at regular time intervals (for example, 30 frames per second) along the x, y, and/or z directions. In other embodiments, the sampling routine can be omitted.
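The sampling routine's interpolation onto a regular time grid can be sketched as follows, assuming linear interpolation of one coordinate at a time.

```python
import numpy as np

def resample_positions(times, values, fps: float = 30.0):
    """Linearly interpolate irregularly timed samples (e.g., the mark
    center's x, y, or z coordinate) onto a regular grid at `fps` samples
    per second, as the sampling routine above describes."""
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    grid = np.arange(times[0], times[-1], 1.0 / fps)
    return grid, np.interp(grid, times, values)
```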
The computing module 166 can also include a modeling routine configured to determine the orientation of the remote controller 101 (input end face 102) relative to the detector 104. In certain embodiments, the modeling routine can include subroutines configured to determine and/or calculate parameters of the segmented image. For example, the modeling routine can include a subroutine that determines the shape of the segmented mark 103.
In another example, the computing module 166 can also include a trajectory routine configured to form the temporal trajectory of the remote controller 101 (input end face 102). In one embodiment, the computing module 166 is configured to calculate a vector representing the movement of the remote controller 101 (input end face 102) from a first position/orientation at a first point in time to a second position/orientation at a second point in time. In another embodiment, the computing module 166 is configured to compute an array of vectors, or to plot the trajectory of the remote controller 101 (input end face 102), based on multiple positions/orientations at successive points in time. In further embodiments, the computing module 166 can include linear regression, polynomial regression, interpolation, extrapolation, and/or other suitable subroutines to derive formulas and/or other suitable expressions for the motion of the remote controller 101 (input end face 102). In other embodiments, the computing module 166 can include routines that compute the travel distance, travel direction, velocity profile, and/or other suitable characteristics of the temporal trajectory. In yet other embodiments, the computing module 166 can also include counters, timers, and/or other suitable routines to facilitate the operation of the other modules.
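A minimal sketch of the trajectory routine follows; it computes displacement vectors between successive samples along with the travel distance and travel direction mentioned above.

```python
import numpy as np

def trajectory_vectors(positions: np.ndarray, timestamps: np.ndarray):
    """Given an (N, 3) array of mark-center positions and their timestamps,
    return the displacement vectors between consecutive samples plus basic
    trajectory characteristics."""
    disp = np.diff(positions, axis=0)               # (N-1, 3) vectors
    dt = np.diff(timestamps)[:, None]
    velocity = disp / dt                            # per-axis velocities
    travel_distance = np.linalg.norm(disp, axis=1).sum()
    net = positions[-1] - positions[0]
    direction = net / (np.linalg.norm(net) + 1e-9)  # unit travel direction
    return disp, velocity, travel_distance, direction
```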
The analysis module 162 can be configured to analyze the computed temporal trajectory of the remote controller 101 (input end face 102) to determine the corresponding user action. In certain embodiments, the analysis module 162 analyzes characteristics of the computed temporal trajectory and compares those characteristics with the action models 142. For example, in one embodiment the analysis module 162 can compare the travel distance, travel direction, velocity profile, and/or other suitable types of characteristics of the temporal trajectory with the known actions in the action models 142. If a match is found, the analysis module 162 is configured to indicate that a particular user action or gesture has been recognized.
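Matching trajectory characteristics against the action models 142 might look like the following sketch, reusing the characteristics computed in the previous sketch; the structure of the models (a minimum travel distance plus a reference direction) is an assumption for illustration.

```python
import numpy as np

def match_action(travel_distance, direction, action_models):
    """Return the name of the first action model whose characteristics the
    trajectory satisfies, or None if no model matches."""
    for name, model in action_models.items():
        if travel_distance < model["min_distance"]:
            continue
        if np.dot(direction, model["direction"]) > 0.9:  # directions agree
            return name
    return None

# Illustrative models: a left-to-right swipe and its reverse.
ACTION_MODELS = {
    "swipe_right": {"min_distance": 0.10, "direction": np.array([1.0, 0, 0])},
    "swipe_left":  {"min_distance": 0.10, "direction": np.array([-1.0, 0, 0])},
}
```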
The analysis module 162 can also be configured to correlate the recognized user action (remote-controller action) with a control action based on the action-command map 144. For example, if the recognized user action is a lateral movement from left to right, the analysis module 162 can correlate it with a cursor action of shifting laterally from left to right, as shown in Fig. 1. In other embodiments, the analysis module 162 can correlate various user actions (remote-controller actions) with any suitable commands and/or data inputs.
The control module 164 can be configured to control the operation of the controller 118 (Fig. 1) based on the commands and/or data inputs recognized by the analysis module 162. For example, in one embodiment, the control module 164 can include an application programming interface ("API") controller for interfacing with the operating system and/or application programs of the controller 118. In other embodiments, the control module 164 can include a feedback routine (for example, a proportional-integral or proportional-integral-derivative routine) that generates one of the output signals 152 (for example, a control signal for cursor movement) to the output module 138 based on the recognized commands and/or input data. In further examples, the control module 164 can perform other suitable control operations based on an operator input 154 and/or other suitable inputs. The display module 140 can then receive the determined commands and generate corresponding output to the user (Fig. 1).
Fig. 6A is a flowchart showing a method 200 for non-touch operation of an electronic system in accordance with an embodiment of this technology. Although the method 200 is described below with reference to the electronic system 100 of Fig. 1 and the software modules of Figs. 4 and 5, the method 200 can also be applied to other systems with additional and/or different hardware/software components.
As shown in Fig. 6A, a stage 202 of the method 200 includes obtaining a data input from the detector 104 (Fig. 1). In one embodiment, obtaining the data input includes capturing image frames of the input end face 102 (Fig. 1) against the background. Each frame can include a plurality of pixels in two or three dimensions (for example, 1280 × 1024). In other embodiments, obtaining the input data can include acquiring radio, laser, ultrasonic, and/or other suitable types of signals.
Another stage 204 of the method 200 includes processing the obtained input data to identify a temporal trajectory of the input end face 102. In one embodiment, the identified temporal trajectory includes a vector representing the movement of the input end face 102. In other embodiments, the identified temporal trajectory includes an array of vectors of the input end face 102. In further embodiments, the identified movement can include other suitable representations of the input end face 102. Several embodiments of processing the obtained input data are described in more detail below with reference to Fig. 6B.
The method 200 then includes a decision stage 206 to determine whether sufficient data are available. In one embodiment, sufficient data are indicated if the processed input data exceed a predetermined threshold. In another embodiment, sufficient data are indicated after a preset period (for example, 0.5 second) has elapsed. In other embodiments, sufficient data can be indicated based on other suitable criteria. If sufficient data are not indicated, the process reverts to obtaining a detector signal at stage 202; otherwise, the process proceeds to stage 208, interpreting a user action based on the identified temporal trajectory of the input end face 102.
In certain embodiments, interpreting a user action includes analyzing characteristics of the temporal trajectory and comparing them with known user actions. For example, position, change of position, lateral movement, vertical movement, movement speed, and/or other temporal-trajectory characteristics can be calculated and compared with predetermined action models. Based on the comparison, a user action can be indicated if the characteristics of the temporal trajectory match those of an action model. An example of interpreting a user action is described in more detail later with reference to Figs. 9A-9D.
The method 200 also includes another stage 210 of mapping the recognized user action to a command. The method 200 then includes a decision stage 212 to determine whether the process should continue. In one embodiment, the process continues if another movement of the input end face 102 is detected. In other embodiments, the process can continue based on other suitable criteria. If the process continues, it reverts to obtaining sensor readings at stage 202; otherwise, the process ends.
Fig. 6B is a flowchart showing a signal processing method 204 suitable for the method 200 of Fig. 6A in accordance with an embodiment of this technology. As shown in Fig. 6B, a stage 220 of the method 204 includes performing image segmentation on the obtained detector signal to identify the pixels and/or image blocks corresponding to the mark 103 (Fig. 1). Techniques for identifying such pixels are described above with reference to Fig. 5. An example of image segmentation is described in more detail later with reference to Figs. 7A-7B.
Another stage 221 of the method 204 includes modeling the segmented image to determine at least one of the orientation and the position of the input end face 102 (Fig. 1) relative to the detector 104 (Fig. 1). In one embodiment, image modeling includes identifying characteristics of the segmented image and comparing them with predetermined patterns. Such characteristics can include the projected shape of the mark 103 and/or other suitable characteristics. In other embodiments, image modeling can include a combination of the foregoing techniques and/or other suitable techniques. Based on the comparison between the identified characteristics of the segmented image and those in the patterns, the orientation and/or position of the remote controller 101 (input end face 102) can be determined. An example of image modeling is described in more detail later with reference to Figs. 8A-8G.
Optionally, the process can also include image sampling at stage 222. In one embodiment, the image models of the obtained input data are sampled at regular time intervals along the x, y, or z direction by applying linear interpolation, extrapolation, and/or other suitable techniques. In other embodiments, the image models of the obtained detector signals are sampled at other suitable time intervals. In further embodiments, the image sampling stage 222 can be omitted. Another stage 224 of the process can include constructing the temporal trajectory of the input end face 102 (Fig. 1). Techniques for constructing the temporal trajectory are described above with reference to Fig. 5. After the temporal trajectory has been constructed, the process returns to the method 200 of Fig. 6A.
Figs. 7A-9D schematically illustrate several aspects of the method 200 described above with reference to Figs. 6A and 6B. Fig. 7A is a schematic spatial diagram of the remote controller 101 (input end face 102) and the detector 104 in accordance with an embodiment of this technology. As shown in Fig. 7A, the detector 104 has a two-dimensional viewing area 170, and the input end face 102 includes the mark 103 with a center c_j and a motion vector. As described above, the mark 103 emits the signal 110 toward the detector 104. In response, the detector 104 obtains image frames F_i(x, y) of the input end face 102.
The obtained images of the input end face 102 are then segmented to identify the pixels or image blocks corresponding to the mark 103. Fig. 7B is a schematic diagram of the segmented image of the input end face 102. As shown in Fig. 7B, the segmented image 172 can be modeled as a rounded-rectangle ring 174 (shown hatched for clarity), from which an orientation can be identified.
Figs. 8A-8G show an example technique of image modeling for determining the orientation and/or position of the remote controller 101 (input end face 102) relative to the detector 104. Figs. 8A-8C schematically show three relative orientations between the input end face 102 and the detector 104 in accordance with embodiments of this technology. As shown in Figs. 8A-8C, the input end face 102 has an input plane 175 and the detector 104 has a detector plane 177. Fig. 8A shows the input plane 175 substantially parallel to the detector plane 177. Fig. 8B shows the input plane 175 at an angle to the detector plane 177. Fig. 8C shows the input plane 175 substantially perpendicular to the detector plane 177.
Figs. 8D-8F schematically show the segmented images of the input end face 102 in Figs. 8A-8C, respectively. Different orientations can cause the mark 103 to present different visible shapes to the detector 104. For example, as shown in Fig. 8D, when the input plane 175 is substantially parallel to the detector plane 177, the mark 103 projects orthogonally, producing a substantially laterally symmetric rounded-rectangle ring. As shown in Fig. 8E, when the input plane 175 is at an angle to the detector plane 177, the mark 103 projects obliquely, producing a vertically asymmetric rounded-rectangle ring (or, because part of the mark 103 is completely occluded by the portion of the remote controller 101 at the input end face 102, the lower portion of the rounded-rectangle ring becomes incomplete, or a roughly rectangular projection results). As shown in Fig. 8F, when the input plane 175 is substantially perpendicular to the detector plane 177, the mark 103 projects edge-on, producing a half rounded-rectangle projection.
Fig. 8G schematically shows the input plane 175 relative to the detector plane 177 in accordance with an embodiment of this technology. As shown in Fig. 8G, the input plane 175 is defined by points ABEF, and the detector plane is defined by points AHGC. Without being bound by theory, it is believed that the orientation of the input plane 175 relative to the detector plane 177 can be specified by a first angle (EBD) and a second angle (BAC). It is believed that probable values of the angles (EBD) and (BAC) can be calculated from the projected shape of the mark 103, based on the known geometry of the input end face 102 (mark 103), over the set of angles A = {0, α_1, ..., α_n, π : α_i < α_{i+1}}. As a result, for each combination of the angles (EBD) and (BAC), a corresponding set of pairwise distances of the mark 103 can be calculated and stored in the memory 122 (Fig. 4).
As described with reference to Figs. 6A and 6B, the pairwise distances calculated from the segmented image can then be compared with the angle set A and the corresponding predetermined pairwise distances. Based on the comparison, the angles (EBD) and (BAC) can be estimated as the elements of the set A whose pairwise distances substantially match those calculated from the segmented image. In certain embodiments, for example, the calculated and predetermined pairwise distances can be normalized to the maximum pairwise distance. In other embodiments, such normalization can be omitted. Once the orientation of the input plane 175 is determined, the distance from the input end face 102 of the remote controller 101 (for example, from its center) to the detector 104 can be estimated as
B = D · b_i / d_i,
wherein b_i is the observed distance between two projections of the mark, and d_i is the preset distance between the two projections of the mark.
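Combining the angle lookup and the distance formula, the Fig. 8G procedure might be sketched as follows; the stored-table layout, the treatment of D as a proportionality constant, and the choice of b_i as the maximum observed pairwise distance are assumptions for illustration, since the patent does not define them further here.

```python
import numpy as np

def estimate_pose(observed_pairwise, stored_table, D, d_i):
    """Look up the (EBD, BAC) angle combination whose stored pairwise
    distances best match the observed ones, then apply B = D * b_i / d_i."""
    obs = np.asarray(observed_pairwise, dtype=float)
    obs_n = obs / obs.max()            # normalize to the maximum pairwise distance
    best_angles, best_err = None, np.inf
    for angles, ref in stored_table.items():   # angles = (ebd, bac)
        ref = np.asarray(ref, dtype=float)
        err = np.sum((obs_n - ref / ref.max()) ** 2)
        if err < best_err:
            best_angles, best_err = angles, err
    b_i = obs.max()                    # observed distance between two mark projections
    distance_B = D * b_i / d_i
    return best_angles, distance_B
```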
The foregoing operations can be repeated to form a temporal trajectory that can be interpreted as a particular command and/or data input. Figs. 9A-9D schematically show an example of recognizing a user action and correlating the user action with a command in accordance with an embodiment of this technology. As shown in Fig. 9A, the movement of the remote controller 101 (input end face 102) includes a generally forward trajectory 180 and a backward trajectory 182 in the y-z plane. As shown in Fig. 9B, a first characteristic of the temporal trajectory in Fig. 9A is that the forward and backward trajectories have travel distances exceeding a distance threshold 184. Likewise, as shown in Fig. 9C, a second characteristic of the temporal trajectory in Fig. 9A is that the distance along the x-axis is below a predetermined threshold, indicating that movement along the x-axis is negligible. In addition, as shown in Fig. 9D, a third characteristic of the temporal trajectory is that the speed of the center of the input end face 102 (Fig. 9A) of the remote controller 101 exceeds a preset negative speed threshold while moving toward the detector 104 (Fig. 9A) and exceeds a positive speed threshold while moving away from the detector 104.
In one embodiment, if all of the first, second, and third characteristics of the temporal trajectory are recognized, the user action can be confirmed to be a click, a selection, a double-click, and/or another suitable command. In other embodiments, only some of the first, second, and third characteristics may be used for correlating with a command. In further embodiments, at least one of these characteristics can be combined with other suitable characteristics for correlating with a command.
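For illustration, the three characteristics can be checked against a sampled trajectory as sketched below; the threshold values and axis conventions are assumptions, not values from the patent.

```python
import numpy as np

def is_click(positions, timestamps,
             dist_threshold=0.05, x_threshold=0.01, speed_threshold=0.2):
    """Recognize the click-like gesture of Figs. 9A-9D: a forward-and-back
    movement (here taken along z) with negligible x movement and speed
    peaks in both directions."""
    p = np.asarray(positions, dtype=float)   # (N, 3): x, y, z samples
    t = np.asarray(timestamps, dtype=float)
    vz = np.diff(p[:, 2]) / np.diff(t)       # z-axis speed over time
    z_travel = np.abs(np.diff(p[:, 2])).sum()
    x_travel = np.abs(np.diff(p[:, 0])).sum()
    char1 = z_travel > dist_threshold        # enough forward/back travel
    char2 = x_travel < x_threshold           # negligible x movement
    char3 = vz.min() < -speed_threshold and vz.max() > speed_threshold
    return char1 and char2 and char3
```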
From the foregoing, it should be understood that several embodiments of the disclosure have been described herein for purposes of illustration, and that various modifications can be made without deviating from the disclosure. In addition, many of the elements of one embodiment can be combined with elements of other embodiments, or can substitute for elements of other embodiments. Accordingly, the technology is not limited except as by the appended claims.

Claims (19)

1. A computer-implemented method, comprising:
obtaining images of an input end face with a camera, the input end face being the emitting face of a remote controller and having a mark arranged around the input end face;
identifying a block in each obtained image, the identified block corresponding to the mark;
forming a temporal trajectory of the input end face based on the block identified in each obtained image;
correlating the formed temporal trajectory with a computing command; and
executing the computing command with a processor.
2. The method according to claim 1, wherein obtaining the images of the input end face comprises: obtaining a plurality of frames of the input end face with a camera coupled to the processor.
3. The method according to claim 1, wherein identifying the block comprises:
comparing the intensity value of each pixel of each obtained image with a predetermined threshold; and
if the intensity value of a pixel is greater than the predetermined threshold, indicating that the pixel corresponds to at least a portion of the mark.
4. The method according to claim 1, wherein identifying the block comprises:
comparing the shape and/or size range of the segmented pixels in each obtained image with a preset shape and/or preset size range, respectively; and
if the shape and/or size range of the segmented pixels approximately matches the preset shape and/or preset size range, respectively, indicating that the pixels correspond to at least a portion of the mark.
5. The method according to claim 1, further comprising: for each obtained image, analyzing the identified block to determine the orientation of the input end face based on the shape of the mark on the input end face.
6. The method according to claim 5, further comprising: calculating the distance from the input end face to the camera based on the determined orientation of the input end face.
7. The method according to claim 1, wherein forming the temporal trajectory comprises identifying the orientation and position of the input end face over time, and wherein the method further comprises: recognizing a user action based on characteristics of the temporal trajectory.
8. The method according to claim 1, wherein forming the temporal trajectory comprises identifying the orientation and position of the input end face over time, and wherein the method further comprises: recognizing a user action based on characteristics of the temporal trajectory, the characteristics comprising at least one of travel distance, travel direction, speed, velocity, and direction reversal.
9. An electronic system, comprising:
means for obtaining images of an input end face, the input end face being the emitting face of a remote controller and having a mark arranged around the input end face;
means for identifying a block in each obtained image, the identified block corresponding to the mark;
means for forming a temporal trajectory of the input end face based on the block identified in each obtained image;
means for correlating the formed temporal trajectory with a computing command; and
means for executing the computing command.
10. The electronic system according to claim 9, wherein the signal pattern comprises a plurality of separate signals, and the means for analyzing the signal pattern comprises: means for identifying, based on the separate signals, the projected shape of the mark in the received input data.
11. The electronic system according to claim 10, wherein the means for analyzing the signal pattern further comprises:
means for determining at least one of the orientation and position of the input end face relative to the detector, based on the arranged shape of the mark on the input end face and the projected shape of the input end face relative to the detector.
12. The electronic system according to claim 9, further comprising means for identifying the computing command, comprising:
means for repeating the receiving and analyzing operations to obtain at least one of the orientation and position of the input end face relative to the detector over time; and
means for correlating at least one of the orientation and position of the input end face relative to the detector over time with the computing command.
13. The electronic system according to claim 9, further comprising means for identifying the computing command, comprising:
means for repeating the receiving and analyzing operations to obtain at least one of the orientation and position of the input end face relative to the detector over time;
means for determining at least one of the travel distance, travel direction, speed, velocity, and direction reversal of the input end face based on at least one of the orientation and position of the input end face relative to the detector over time; and
means for correlating at least one of the determined travel distance, travel direction, speed, velocity, and direction reversal with the computing command.
14. A computing system, comprising:
an input module configured to obtain images of an input end face from a camera, the input end face being the emitting face of a remote controller and having a mark arranged around the input end face;
a sensing module configured to identify a block in each obtained image, the identified block corresponding to the mark;
a computing module configured to form a temporal trajectory of the input end face based on the block identified in each obtained image; and
an analysis module configured to correlate the formed temporal trajectory with a computing command.
15. The computing system according to claim 14, wherein the sensing module is configured to:
compare the intensity value of each pixel of each obtained image with a predetermined threshold; and
if the intensity value of a pixel is greater than the predetermined threshold, indicate that the pixel corresponds to at least a portion of the mark.
16. The computing system according to claim 14, wherein the sensing module is configured to:
compare the shape of pixels in each obtained image with a preset shape; and
if the shape of the pixels approximately matches the preset shape, indicate that the pixels correspond to the mark.
17. The computing system according to claim 14, wherein the computing module is further configured to: determine the orientation of the input end face based on the arrangement of the mark on the input end face.
18. The computing system according to claim 17, wherein the computing module is further configured to: calculate the distance from the input end face to the camera based on the determined orientation of the input end face.
19. The computing system according to claim 14, wherein the computing module is further configured to identify the temporal trajectory of the input end face, and the analysis module is further configured to recognize a user action based on characteristics of the temporal trajectory, the characteristics comprising at least one of travel distance, travel direction, speed, velocity, and direction reversal.
CN201210451157.7A 2012-11-12 2012-11-12 Electronic system and relevant method thereof Pending CN103809772A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210451157.7A CN103809772A (en) 2012-11-12 2012-11-12 Electronic system and relevant method thereof


Publications (1)

Publication Number Publication Date
CN103809772A true CN103809772A (en) 2014-05-21

Family

ID=50706659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210451157.7A Pending CN103809772A (en) 2012-11-12 2012-11-12 Electronic system and relevant method thereof

Country Status (1)

Country Link
CN (1) CN103809772A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101729808A (en) * 2008-10-14 2010-06-09 Tcl集团股份有限公司 Remote control method for television and system for remotely controlling television by same
CN102682589A (en) * 2012-01-09 2012-09-19 西安智意能电子科技有限公司 System for distant control of controlled device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140521