Summary of the Invention
To this end, the present invention provides a driving state detection method, a mobile terminal and a storage medium, in an effort to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the invention, a driving state detection method is provided, executed in a mobile terminal. The mobile terminal includes an infrared camera module adapted to continuously capture images of the user's face and generate a video frame sequence. The method includes: each time a video frame is captured, locating the eyes in the video frame and extracting an eye image; performing eyelid segmentation and iris segmentation on the extracted eye image; calculating the eye opening from the segmentation result, the eye opening being the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour curve, to the area of the figure enclosed by the iris outer contour curve; judging from the eye opening whether the eyes are closed; and judging whether the user is driving while fatigued according to the proportion of frames in which the eyes are closed among a predetermined number of consecutive video frames including the current frame.
This scheme judges eye closure by a two-dimensional relative opening, so it is little affected by individual differences in natural eye opening; combining the judgment over consecutive frames improves the accuracy of fatigue detection.
Optionally, the method further includes: calculating the magnitude of the viewing angle based on the position of the center of the pupil's reflection spot of the infrared light relative to the iris center, the direction of the viewing angle being from the center of the reflection spot toward the iris center; judging from the viewing angle whether the eyes face front; and judging whether the user's attention is focused according to the proportion of frames in which the eyes do not face front among a predetermined number of consecutive video frames including the current frame.
Optionally, the method further includes: storing the eye closure state and the viewing angle determined for each video frame in a circular linked list, so as to count the proportion of frames in which the eyes are closed and/or the proportion of frames in which the eyes do not face front.
Optionally, the method further includes: after judging that the user is driving while fatigued, issuing first warning information prompting the user that he or she is currently in a fatigued driving state; and/or, after judging that the user's attention is not focused, issuing second warning information prompting the user to concentrate.
Optionally, the position of the user's eyes is determined in the video frame based on the position of the pupil's reflection spot of the infrared light; and the eye image is extracted from the video frame according to the determined eye position.
Optionally, based on the gray-level distribution of the reflection spot versus the pupil and iris region, the saturated region of the reflection spot is determined by binarizing the video frame; the saturated region is filtered with a horizontal gradient filter and a vertical gradient filter to determine the position of the reflection spot; and the position of the user's eyes is determined based on the position of the reflection spot.
Optionally, based on the vertical gradient between the upper/lower eyelids and the sclera, the upper and lower eyelid curves are obtained by polynomial fitting; based on the horizontal gradient between the left and right sides of the iris and the sclera, the iris outer contour curve is obtained by circle fitting.
Optionally, the method further includes: obtaining the user's eye opening in a non-fatigued state as the initial opening.
Optionally, a predetermined threshold is determined based on the initial opening, the predetermined threshold being the product of the initial opening and a predetermined coefficient; when the eye opening is less than the predetermined threshold, the eyes are judged to be closed.
Optionally, the viewing angle is calculated based on the distance between the center of the reflection spot and the iris center, and the radius of the iris.
Optionally, the viewing angle is calculated by the following formula:
where α is the viewing angle, R is the iris radius, and d is the distance between the center of the reflection spot and the iris center.
According to another aspect of the present invention, a mobile terminal is provided, including one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for executing the driving state detection method.
According to yet another aspect of the present invention, a computer-readable storage medium storing one or more programs is provided, the one or more programs including instructions which, when executed by a mobile terminal, cause the mobile terminal to execute the driving state detection method.
The above scheme captures a continuous video frame sequence with the infrared camera module, performs face recognition and eye localization, determines the upper and lower eyelid curves and the iris outer contour curve by eyelid segmentation and iris segmentation of the eye image, then judges from the eye opening and the viewing angle whether the eyes are closed and whether they face front, and judges the driving state and gives warning information by jointly analyzing, over consecutive frames, the proportion of closed-eye frames and the proportion of frames not facing front. Individual differences are taken into account when calculating the eye opening: the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour curve, to the area of the figure enclosed by the iris outer contour curve, serves as the eye opening. The method requires no additional device; it suffices to install an application on the mobile terminal to execute the corresponding instructions. The driving state detection method is thus general and simple, and can improve the accuracy of driving state detection.
Detailed Description
Exemplary embodiments of the disclosure are described more fully below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention is understood more thoroughly and the scope of the disclosure is conveyed completely to those skilled in the art.
A typical fatigue monitoring system infers the driver's fatigue state from the driver's facial features, eye signals, head movements and the like, and issues an early warning. The present invention performs driving state detection with a mobile terminal that captures infrared images, for example a mobile phone with active infrared illumination. An image of the eye region is obtained, and the ratio of the area between the upper and lower eyelids within the iris region to the whole iris area serves as the eye opening; this ratio is unaffected by individual differences in eye opening, so the driver's state can be analyzed simply and effectively and a driver in a fatigued state can be reminded.
Fig. 1 shows a structural block diagram of a mobile terminal 100 according to an embodiment of the invention. The mobile terminal 100 may include a memory interface 102, one or more data processors, image processors and/or central processing units 104, a display screen (not shown in Fig. 1), and a peripheral interface 106.
The memory interface 102, the one or more processors 104 and/or the peripheral interface 106 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems may be coupled to the peripheral interface 106 to help realize a variety of functions.
For example, a motion sensor 110, a light sensor 112 and a range sensor 114 may be coupled to the peripheral interface 106 to facilitate functions such as orientation, illumination and ranging. Other sensors 116 may likewise be connected to the peripheral interface 106, such as a positioning system (e.g., a GPS receiver), a temperature sensor, a biometric sensor or another sensor device, thereby helping to implement related functions.
A camera subsystem 120 and an optical sensor 122 may be used to facilitate camera functions such as recording photographs and video clips, where the optical sensor may be, for example, a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) optical sensor. Communication functions may be aided by one or more radio communication subsystems 124, where a radio communication subsystem may include a radio-frequency receiver and transmitter and/or an optical (e.g., infrared) receiver and transmitter. The particular design and embodiment of the radio communication subsystem 124 may depend on the one or more communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include a communication subsystem 124 designed to support LTE, 3G, GSM networks, GPRS networks, EDGE networks, Wi-Fi or WiMax networks, and Bluetooth™ networks.
An audio subsystem 126 may be coupled with a loudspeaker 128 and a microphone 130 to help implement voice-enabled functions such as speech recognition, speech reproduction, digital recording and telephony. An I/O subsystem 140 may include a touch screen controller 142 and/or one or more other input controllers 144. The touch screen controller 142 may be coupled to a touch screen 146. For example, the touch screen 146 and the touch screen controller 142 may detect contact and the movement or pause thereof using any of a variety of touch-sensing technologies, including but not limited to capacitive, resistive, infrared and surface acoustic wave technologies. The one or more other input controllers 144 may be coupled to other input/control devices 148, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices such as a stylus. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the loudspeaker 128 and/or the microphone 130.
The memory interface 102 may be coupled with a memory 150. The memory 150 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 150 may store an operating system 172, for example an operating system such as Android, iOS or Windows Phone. The operating system 172 may include instructions for handling basic system services and performing hardware-dependent tasks. The memory 150 may also store one or more programs 174. When the mobile device runs, the operating system 172 is loaded from the memory 150 and executed by the processor 104. The programs 174, when run, are likewise loaded from the memory 150 and executed by the processor 104. The programs 174 run on top of the operating system and use the interfaces provided by the operating system and the underlying hardware to realize various functions desired by the user, such as instant messaging, web browsing and picture management. A program 174 may be provided independently of the operating system, or may be bundled with the operating system. In addition, when a program 174 is installed in the mobile terminal 100, a driver module may also be added to the operating system. The program 174 may be arranged to have its relevant instructions executed on the operating system by the one or more processors 104. In some embodiments, the mobile terminal 100 is configured to execute the driving state detection method according to the present invention, and the one or more programs 174 of the mobile terminal 100 include instructions for executing the driving state detection method according to the present invention.
The mobile terminal 100 may be a portable electronic device such as a smartphone or a tablet computer, but is not limited thereto. Specifically, the camera subsystem 120 and the optical sensor 122 in the mobile terminal 100 form an infrared camera module capable of continuously capturing infrared images of the user's face and generating a video frame sequence. The infrared camera module includes an active infrared emitting device, and its capture rate is generally no less than 15 frames per second. The mobile terminal can be fixed in front of the driver so that face images are captured; frames in which no face is captured are weeded out according to the overall intensity of the image. If no face image is collected over several consecutive frames, a corresponding prompt is given. The information from which the driving state is judged is the continuous face video frames captured by the mobile terminal with the infrared camera module during driving.
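The frame pre-filtering step described above can be sketched as follows; this is only an illustration under assumed parameters — the function names, the intensity thresholds and the 15-frame miss limit are not specified in the source:

```python
def has_face_brightness(frame, lo=40.0, hi=220.0):
    """Heuristic pre-filter: under active IR illumination, a frame with no
    face in front of the camera tends to be nearly uniformly dark (or blown
    out), so its mean intensity falls outside a plausible band.  `frame` is
    a 2-D grayscale image as a list of rows; thresholds are illustrative."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return lo <= total / count <= hi

def filter_frames(frames, max_missing=15):
    """Drop frames that fail the brightness check; if more than `max_missing`
    consecutive frames fail, flag that a prompt should be shown to the user."""
    kept, missing, prompt = [], 0, False
    for frame in frames:
        if has_face_brightness(frame):
            kept.append(frame)
            missing = 0
        else:
            missing += 1
            if missing > max_missing:
                prompt = True
    return kept, prompt
```

In practice the check would run per captured frame rather than over a stored list; the list form here just makes the logic easy to follow.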
Fig. 2 shows a schematic flow chart of a driving state detection method according to an embodiment of the invention. As shown in Fig. 2, in step S200, each time the infrared camera module captures a video frame, the position of the eyes is located in the video frame and an eye image is extracted.
Different parts of the eye reflect and refract infrared light differently. By the corneal reflection principle, infrared light is reflected by the front surface of the cornea and forms a small bright area in the image, i.e., the reflection spot. In a grayscale face image, the pupil is the darkest part and the reflection spot corresponds to the brightest point in the eye image, with the gray values of the remaining parts lying in between. The spot position can therefore be located from the gray-level distribution pattern of the pupil's reflection spot relative to its surroundings, which indirectly locates the eyes.
The position of the user's eyes can be determined in the video frame based on the position of the pupil's reflection spot of the infrared light; the eye image is then extracted from the video frame according to the determined eye position.
As can be seen from a video frame, the gray values of the reflection spot area have a large gradient with respect to the pupil and iris region, and the gray level at the spot center tends toward saturation; that is, the pixel values of the reflection spot are saturated or close to saturated, while the surrounding pixel values are much smaller.
According to one embodiment of the present invention, the saturated region of the reflection spot can be determined by binarizing the image, based on the gray-level distribution of the reflection spot versus the surrounding pupil and iris region. For example, the grayscale image is binarized by taking a certain gray level as the dividing line: pixels at or above that level are set to 255 (pure white) and pixels below it to 0, so that every pixel in the image takes one of the two values. Because of noise interference, several saturated regions may be obtained, and these saturated regions need to be filtered.
For example, by using a vertical filter and a horizontal filter to keep only saturated regions whose edge gradient exceeds a given threshold and whose area and height-to-width ratio lie within a reasonable range, the spot position can be located. Edge detection with pixel gradient filters can locate the contour of the reflection spot: edge detection determines whether a pixel lies on the boundary of an object by examining the state of the pixel and its neighborhood. Edge detection algorithms are mainly based on the first and second derivatives of the image intensity, but derivative computation is very sensitive to noise, so a filter can be convolved with the image to reduce the effect of noise on edge detection. For example, the vertical filter extracts the upper and lower contours of the spot and the horizontal filter extracts its left and right contours, thereby determining the spot's aspect ratio. Small interfering spots that may be produced at the eye corners or by the eyelashes can be filtered out by spot area and aspect ratio. The located spot roughly determines the eye center, after which the eye image is extracted for eyelid segmentation and iris segmentation.
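The spot-localization steps above — binarize near-saturated pixels, group them into regions, then reject regions whose area or aspect ratio does not look like a compact glint — can be sketched as follows. The thresholds and the 4-connected region growing are illustrative assumptions, not values from the source:

```python
from collections import deque

def find_glint(img, thresh=250, min_area=2, max_area=50, max_aspect=2.0):
    """Locate the corneal reflection spot in a grayscale image (list of
    rows).  Returns the centroid (row, col) of the best candidate region,
    or None if nothing passes the area / aspect-ratio filters."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    best = None  # (area, row_centroid, col_centroid)
    for r0 in range(h):
        for c0 in range(w):
            if img[r0][c0] >= thresh and not seen[r0][c0]:
                # Grow the 4-connected near-saturated region by BFS.
                queue, region = deque([(r0, c0)]), []
                seen[r0][c0] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] >= thresh and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [p[0] for p in region]
                xs = [p[1] for p in region]
                height = max(ys) - min(ys) + 1
                width = max(xs) - min(xs) + 1
                aspect = max(height, width) / min(height, width)
                area = len(region)
                # Keep the largest region that looks like a compact glint;
                # tiny regions (eyelash/corner glints) are rejected by area.
                if (min_area <= area <= max_area and aspect <= max_aspect
                        and (best is None or area > best[0])):
                    best = (area, sum(ys) / area, sum(xs) / area)
    return None if best is None else (best[1], best[2])
```

A production implementation would use the gradient-filter formulation of the text on full-resolution frames; the BFS stands in for connected-component labeling only to keep the sketch self-contained.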
In step S210, eyelid segmentation and iris segmentation can be performed on the extracted eye image.
Since there is a large gradient between the eyelids and the sclera, and between the left and right sides of the iris and the sclera, the upper and lower eyelid curves can be obtained by polynomial fitting based on the vertical gradient between the eyelids and the sclera, and the iris outer contour curve can be determined by least-squares circle fitting based on the horizontal gradient between the two sides of the iris and the sclera.
For example, a gradient filter locates discrete, noise-contaminated segments of the upper eyelid curve. Since this localization is only partial, curve fitting is needed to obtain a complete eyelid curve. Geometrically, polynomial curve fitting seeks the curve that minimizes the sum of squared distances to the given points. For example, the segments with large vertical gradient are determined and connected; since a quadratic function matches the actual eyelid shape well, while a higher-order function may overfit because of noise, the curve formed by these segments can be fitted with a downward-opening quadratic function. The lower eyelid curve is fitted similarly and is not described again here.
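The quadratic least-squares fit described above can be sketched as follows; this solves the 3×3 normal equations directly (by Cramer's rule) so the example stays self-contained, which is an implementation choice, not the patent's method:

```python
def fit_quadratic(points):
    """Least-squares fit of y = a*x^2 + b*x + c to (x, y) edge points
    detected along an eyelid.  For the upper eyelid, a < 0 gives the
    downward-opening parabola the text expects."""
    n = len(points)
    Sx = sum(x for x, _ in points)
    Sx2 = sum(x ** 2 for x, _ in points)
    Sx3 = sum(x ** 3 for x, _ in points)
    Sx4 = sum(x ** 4 for x, _ in points)
    Sy = sum(y for _, y in points)
    Sxy = sum(x * y for x, y in points)
    Sx2y = sum(x * x * y for x, y in points)
    # Normal equations M @ [a, b, c] = v for the least-squares problem.
    M = [[Sx4, Sx3, Sx2], [Sx3, Sx2, Sx], [Sx2, Sx, n]]
    v = [Sx2y, Sxy, Sy]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(M)
    coeffs = []
    for i in range(3):  # Cramer's rule: replace column i with v.
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        coeffs.append(det3(Mi) / D)
    return tuple(coeffs)  # (a, b, c)
```

With a numerical library available, `numpy.polyfit(x, y, 2)` would compute the same coefficients.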
Because the iris contour is occluded by the upper and lower eyelids, its upper and lower portions are sometimes hard to locate accurately, so discrete iris contour segments can be located with a horizontal gradient filter. For example, segments with large horizontal gradient at the left and right iris boundaries are determined and connected; the parameters are then estimated by the least-squares method and these curve segments are fitted with a circle, thereby determining the outer contour of the iris. Curve fitting can also be carried out in other ways; this scheme does not limit it.
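The least-squares circle fit can be sketched as follows. The algebraic formulation below (fitting x² + y² = A·x + B·y + C) is one common way to pose the least-squares problem the text mentions; the source does not say which formulation it uses:

```python
import math

def fit_circle(points):
    """Algebraic least-squares circle fit to iris boundary points.
    Writes the circle as x^2 + y^2 = A*x + B*y + C, solves the 3x3
    normal equations, then center = (A/2, B/2) and
    R = sqrt(C + (A^2 + B^2)/4)."""
    n = len(points)
    Sx = sum(x for x, _ in points)
    Sy = sum(y for _, y in points)
    Sxx = sum(x * x for x, _ in points)
    Syy = sum(y * y for _, y in points)
    Sxy = sum(x * y for x, y in points)
    Sxz = sum(x * (x * x + y * y) for x, y in points)
    Syz = sum(y * (x * x + y * y) for x, y in points)
    Sz = sum(x * x + y * y for x, y in points)
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    v = [Sxz, Syz, Sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(M)
    sol = []
    for i in range(3):  # Cramer's rule for A, B, C.
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        sol.append(det3(Mi) / D)
    A, B, C = sol
    return A / 2, B / 2, math.sqrt(C + (A * A + B * B) / 4)
```

The returned center and radius are exactly the iris center point and iris radius R used later for the viewing-angle computation.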
Fig. 3 shows a schematic diagram of the upper and lower eyelid curves and the iris curve according to an embodiment of the invention.
In step S220, the eye opening can be calculated from the segmentation result, the eye opening being the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour curve, to the area of the figure enclosed by the iris outer contour curve. As shown in Fig. 3, the shaded part is the overlap of the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour curve.
According to one embodiment of the invention, the user's eye opening in a non-fatigued state can be obtained as the initial opening.
The opening is the degree to which the eyes are open. For any given individual, the eye opening in a non-fatigued state is relatively large, i.e., the area of the shaded part is large. Before driving state detection begins, an initialization can be performed in the user's non-fatigued state: with the eyes normally open, the area of the overlap between the figure enclosed by the upper and lower eyelids and the figure enclosed by the iris outer contour is S0, and the area of the figure enclosed by the iris outer contour is S; the initial opening is then a0 = S0/S, which serves as the reference value for judging eye closure.
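The opening ratio a0 = S0/S can be computed directly once the two regions are rasterized; a minimal sketch, assuming both regions are given as same-size binary masks (the mask representation is an illustration, not something the source specifies):

```python
def opening_ratio(eyelid_mask, iris_mask):
    """Eye opening as defined in the text: the area of the overlap
    between the region enclosed by the upper and lower eyelid curves
    and the region enclosed by the iris outer contour, divided by the
    full iris area.  Masks are same-size grids of 0/1 values."""
    overlap = sum(e & i
                  for e_row, i_row in zip(eyelid_mask, iris_mask)
                  for e, i in zip(e_row, i_row))
    iris_area = sum(sum(row) for row in iris_mask)
    return overlap / iris_area
```

Called on the initialization frame this yields a0; called per frame it yields the a_t used below.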
In step S230, whether the eyes are closed can be judged from the eye opening.
According to one embodiment of the invention, a predetermined threshold can be determined based on the initial opening, the predetermined threshold being the product of the initial opening and a predetermined coefficient; when the eye opening is less than the predetermined threshold, the eyes are judged to be closed.
From the eye image extracted at a certain time t, the ratio at of the area St of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour curve, to the area S of the figure enclosed by the iris outer contour curve, is calculated: at = St/S. The predetermined coefficient can be adjusted to actual conditions; empirically it can be taken as 0.5, so that if at < 0.5*a0 the eyes are considered to be closed, and otherwise open.
The eye closure state determined for each video frame can be stored in a circular linked list, so as to count the proportion of frames in which the eyes are closed.
In step S240, whether the user is driving while fatigued can be judged according to the proportion of closed-eye frames among a predetermined number of consecutive video frames including the current frame.
For example, the eye closure states of the most recent 30 frames are examined: by traversing the 30-frame history, the number of frames whose eye opening falls below 0.5*a0 is counted, and if that number exceeds 15, the driver is judged to be currently in a fatigued driving state. After judging that the user is driving while fatigued, first warning information can be issued, prompting the user that he or she is currently fatigued and needs to stop and rest.
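The per-frame closure test and the 30-frame sliding-window judgment can be sketched together; a bounded deque stands in for the circular linked list of the text (class and parameter names are illustrative):

```python
from collections import deque

def is_closed(a_t, a0, k=0.5):
    """Per-frame closed-eye test from the text: the frame's opening
    ratio a_t is below k * a0, with k = 0.5 as the empirical
    coefficient the text suggests."""
    return a_t < k * a0

class FatigueMonitor:
    """Keeps the closed/open state of the most recent `window` frames
    (a deque with maxlen plays the role of the circular linked list)
    and flags fatigue when more than `limit` of them are closed; the
    30-frame window and 15-frame limit are the figures in the text."""
    def __init__(self, a0, window=30, limit=15):
        self.a0 = a0
        self.limit = limit
        self.states = deque(maxlen=window)

    def update(self, a_t):
        """Record one frame's opening ratio; return True if the first
        warning (fatigued driving) should be issued now."""
        self.states.append(is_closed(a_t, self.a0))
        return sum(self.states) > self.limit
```

Because the deque discards its oldest entry automatically, each frame costs O(1) bookkeeping, matching the circular-list motivation given later in the text.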
According to one embodiment of the invention, a viewing angle can also be calculated based on the position of the center of the pupil's reflection spot of the infrared light relative to the iris center, the direction of the viewing angle being from the center of the reflection spot toward the iris center.
Fig. 4 shows a schematic diagram of the viewing angle given by the relative positions of the reflection spot center and the iris center according to an embodiment of the invention. As shown in Fig. 4, O is the iris center and P is the center of the reflection spot; the viewing direction is the direction from the reflection spot center P toward the iris center O.
With the head still, the relative position of the pupil center and the spot center changes as the eye moves, and the change of gaze direction and fixation point can be obtained from this relative position. In a grayscale eye image the pupil is the darkest part, the reflection spot corresponds to the brightest point, and the gray values of the remaining parts lie in between. The grayscale image is binarized, the largest spot in the binary image is extracted, and its position is recorded; its geometric center is the center of the reflection spot. Since the gray values of the pupil are low and those of the iris are higher, the gray level changes sharply near their common edge; an edge detection method based on gradient filters can extract iris boundary points, after which a least-squares circle fit determines the iris outer contour and hence the iris center point.
When a person faces front, the reflection spot falls at the iris center; when glancing sideways, the reflection spot falls on the side of the iris opposite the gaze direction, and the larger the angle, the closer the spot is to the iris contour boundary on that opposite side.
Therefore, the viewing angle can be calculated from the distance between the center of the reflection spot and the iris center, and the radius of the iris:
where α is the viewing angle, R is the iris radius, and d is the distance between the reflection spot center P and the iris center O (as shown in Fig. 4). To detect gaze deflection in the horizontal direction, d is the horizontal distance. Horizontal viewing-angle deflection is of most interest here: when the gaze deflects vertically, as when raising or lowering the head, the eye opening ratio is usually small, and a fatigued state can then be assumed.
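A sketch of the viewing-angle computation follows. The patent's exact formula is not reproduced in this text, so the arcsin(d/R) form below is an assumed stand-in that merely matches the described geometry (α = 0 when the spot is at the iris center; α grows as the spot approaches the iris boundary) — it should not be read as the patent's actual formula:

```python
import math

def viewing_angle(spot_center, iris_center, iris_radius):
    """Gaze-deflection estimate from the spot/iris-center offset.
    ASSUMPTION: uses alpha = arcsin(d / R), chosen only because it is
    consistent with the geometry in the text; the source formula is
    not reproduced here.  Returns the angle in degrees."""
    d = math.hypot(spot_center[0] - iris_center[0],
                   spot_center[1] - iris_center[1])
    # Clamp d/R to 1.0 so segmentation noise cannot push asin out of domain.
    return math.degrees(math.asin(min(d / iris_radius, 1.0)))
```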
The viewing angle determined for each video frame can be stored in a history record; a circular linked list holds the viewing angles of a predetermined number of video frames, so as to count the proportion of frames in which the eyes do not face front. The circular linked list has the advantage that no additional storage is needed: only the linking of the list changes slightly, which makes list handling more convenient and flexible.
Whether the eyes face front can be judged from the viewing angle, and whether the user's attention is focused can be judged according to the proportion of frames in which the eyes do not face front among a predetermined number of consecutive video frames including the current frame. If the viewing angle is greater than zero, the eyes are considered not to face front. If, by traversing the viewing angles of the most recent 30 frames in the circular linked list, the number of frames not facing front exceeds a predetermined threshold, for example 10 frames, the user may not be driving attentively, or may be driving while fatigued; warning information can then also be issued, prompting the user to concentrate.
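The attention check above can be sketched in the same windowed style; the small epsilon guard against floating-point noise is an added assumption, not something the source states:

```python
from collections import deque

def attention_ok(angles, window=30, limit=10, eps=1e-6):
    """Attention check from the text: over the last `window` frames,
    count the frames whose viewing angle is greater than zero (eyes
    not facing front); more than `limit` such frames means the second
    warning should be issued.  `eps` absorbs float noise around zero."""
    recent = deque(angles, maxlen=window)  # keeps only the newest frames
    off_front = sum(1 for a in recent if a > eps)
    return off_front <= limit
```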
Fig. 5 shows a schematic flow chart of driving state detection according to an embodiment of the invention. First, the video stream captured by the infrared camera module is read and it is judged whether a face image has been collected. Eye localization is performed on the video frames containing a face image, and eyelid segmentation and iris segmentation are then performed. From the fitted upper and lower eyelid curves and iris outer contour curve, the eye opening is calculated to judge whether the eyes are closed: the eye opening is the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer contour, to the area of the figure enclosed by the iris outer contour, and it is compared with the initial opening to judge whether the eyes are closed. The viewing angle of the eyes is calculated from the relative positions of the reflection spot center point and the iris center point. Then, among a predetermined number of consecutive video frames including the current frame, the proportion of frames in which the eyes do not face front and the proportion of frames in which the eyes are closed are jointly analyzed to judge the driving state. Different driving states, such as an awake state, a mild fatigued driving state and a severe fatigued driving state, can be determined by different proportions and indexes, and different early warning information is issued according to the state. The history record in the circular linked list is updated every predetermined time or every predetermined number of frames.
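The final state decision can be sketched as a simple mapping from the two window statistics to a state label. The text names the states but gives no numeric cut-offs, so the thresholds below are illustrative assumptions only:

```python
def classify_state(closed_ratio, off_front_ratio):
    """Combine the closed-eye proportion and the not-facing-front
    proportion (both over the same sliding window) into a driving
    state label.  Thresholds are hypothetical, chosen only to show
    the multi-level scheme the text describes."""
    if closed_ratio > 0.5:
        return "severe fatigue"
    if closed_ratio > 0.3 or off_front_ratio > 0.33:
        return "mild fatigue"
    return "awake"
```

Each state would then map to its own early-warning message, per the paragraph above.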
Fig. 6 shows a schematic diagram of a mobile terminal applying the driving state detection method to monitor the driving state according to an embodiment of the invention. As shown in Fig. 6, the mobile terminal can recognize the face and, based on the localization and analysis of the eyes, judge the current driving state. When fatigued driving is judged, it can give a warning prompting the user not to drive while tired; it can also display the duration of continuous driving and statistics of the driving state during the continuous driving time, and can provide personalized services through user settings.
The scheme according to the present invention captures a continuous video frame sequence with an infrared camera module, performs face recognition and eye localization, determines the upper and lower eyelid curves and the iris outer contour curve by eyelid segmentation and iris segmentation of the eye image, then judges from the eye opening and the viewing angle whether the eyes are closed and whether they face front, and judges the driving state and gives warning information by jointly analyzing, over consecutive frames, the proportion of closed-eye frames and the proportion of frames not facing front. Individual differences are taken into account when calculating the eye opening, and whether the eyes are closed is judged against each user's own reference value of eye opening. The method needs no additional device; it suffices to install an application on the mobile terminal to execute the corresponding instructions. The driving state detection method is general and simple, and can improve the accuracy of driving state detection.
A7. The method as described in A6, wherein the step of judging from the eye opening whether the eyes are closed includes: determining a predetermined threshold based on the initial opening, the predetermined threshold being the product of the initial opening and a predetermined coefficient; and, when the eye opening is less than the predetermined threshold, judging the eyes to be closed.
A8. The method as described in A2, wherein the step of calculating the viewing angle based on the position of the center of the pupil's reflection spot of the infrared light relative to the iris center includes: calculating the viewing angle based on the distance between the center of the reflection spot and the iris center, and the radius of the iris.
A9. The method as described in A8, wherein the viewing angle is calculated by the following formula:
where α is the viewing angle, R is the iris radius, and d is the distance between the center of the reflection spot and the iris center.
It should be appreciated that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof in the foregoing description of exemplary embodiments. However, the method of the disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiment, or alternatively may be located in one or more devices different from the devices in the example. The modules in the foregoing examples may be combined into one module or may additionally be divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components in an embodiment may be combined into one module, unit or component, and may additionally be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various techniques described herein may be implemented in connection with hardware or software, or a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy disks, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to execute the method of the present invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
In addition, some of the embodiments are described herein as a method, or as a combination of method elements, that can be implemented by a processor of a computer system or by other devices carrying out the described functions. Thus, a processor having the necessary instructions for carrying out such a method or method element forms a device for carrying out the method or method element. Furthermore, an element described herein of an apparatus embodiment is an example of a device for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the foregoing description, will appreciate that other embodiments can be devised within the scope of the invention as described herein. It should also be noted that the language used in this specification has been principally selected for readability and instructional purposes, rather than to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.