Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a method for controlling the power-on of a terminal, as shown in Figure 1, comprising:
S101: for each of N shooting moments within a preset time, synthesizing the images containing a gesture operation that the first camera and the second camera shot simultaneously at that moment into one of N three-dimensional stereoscopic images;
S102: extracting the human body contour in the three-dimensional stereoscopic image corresponding to a first image, wherein the first image is any one of the images shot by the first camera at the N shooting moments;
S103: obtaining, on the three-dimensional stereoscopic image corresponding to the first image, distance information corresponding to at least one pixel of the human body contour;
S104: comparing whether the differences between the distance information corresponding to the at least one pixel determined in each of the N three-dimensional stereoscopic images fall within a preset threshold range;
S105: if so, generating a power-on instruction and controlling the terminal to turn on according to the power-on instruction.
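The flow of steps S101 to S105 can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation; `synthesize_stereo` and `contour_distance` are hypothetical placeholders for the synthesis, contour extraction, and ranging operations described later in this section.

```python
# Sketch of S101-S105: decide power-on from N per-moment contour distances.
# The helpers below are hypothetical stand-ins for the real image operations.

def synthesize_stereo(left_img, right_img):
    # S101 placeholder: fuse a synchronized image pair into a "stereo image".
    return (left_img, right_img)

def contour_distance(stereo_img):
    # S102/S103 placeholder: distance (in cm) of one contour pixel.
    left_img, _ = stereo_img
    return left_img["distance_cm"]

def should_power_on(left_frames, right_frames, threshold_cm=10.0):
    # S104: the N distances must differ by no more than the threshold.
    stereos = [synthesize_stereo(l, r) for l, r in zip(left_frames, right_frames)]
    dists = [contour_distance(s) for s in stereos]
    return max(dists) - min(dists) <= threshold_cm  # S105: True -> power on

# A user sitting still about 200 cm away: distances barely change.
left = [{"distance_cm": d} for d in (200.0, 201.5, 199.0)]
right = [dict(f) for f in left]
print(should_power_on(left, right))  # expected: True
```

The stub-based structure only mirrors the order of the claimed steps; the threshold comparison itself is the substance of S104.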
The embodiment of the present invention provides a method and system for controlling the power-on of a terminal. At each shooting moment, the images containing a human body shot simultaneously by the first camera and the second camera are synthesized into a three-dimensional stereoscopic image, and distance information corresponding to at least one pixel of the human body contour is obtained on the stereoscopic image corresponding to the two-dimensional image shot by the first camera. The differences between the distance information determined from the N stereoscopic images are then compared against a preset threshold range; the comparison result indicates whether the user is in front of the terminal. If the N pieces of distance information differ by no more than the threshold, the user's distance from the terminal has not jumped, from which it can be judged, for example, that the user has been watching the TV for a period of time. This improves the accuracy of identifying the user's position and of generating the power-on instruction: when the differences between the pieces of distance information fall within the preset threshold range, it can be determined that the user intends to watch the terminal, for example a TV, a power-on instruction is further generated, and the terminal is controlled to turn on. Compared with the prior art, this avoids the problems of infrared detection when sensing whether a user is present, namely susceptibility to the surrounding environment, low recognition accuracy, and poor sensitivity. By combining dual-camera stereoscopic ranging with a human body recognition algorithm, the terminal power-on control method and system can determine the user's intention to power on and automatically turn on the terminal, greatly improving the user's control experience while ensuring high real-time performance and high precision.
The execution subject of the terminal power-on control method of the embodiment of the present invention is the processor of the terminal. The terminal may be a TV, a computer, or the like; the embodiment of the present invention does not limit this. The first camera and the second camera are used to obtain images of the human body and may be cameras arranged on the terminal.

In the embodiment of the present invention, the first camera and the second camera sense whether a user is in front of the terminal. The two cameras may periodically shoot several photos and perform human body recognition; if a user is found to appear in front of the cameras, at least one image containing the user's body is obtained, whether the user is static or moving. Alternatively, the user may manually enter a start instruction for detecting user movement, for example by pressing a button on the terminal's remote control that enables the movement recognition function; after the processor receives the enable instruction triggered by that button, it controls the first camera and the second camera to obtain at least one image of the user's movement.
The preset time refers to the time required to monitor the user and perform the N shots. It may be set in advance, for example to 2 s to 5 s, and may be implemented by a timer arranged in the processor. Within this period, the images containing a human body are buffered in the terminal's memory in the order of acquisition and are fetched from the memory by the processor when recognition is needed. The first camera and the second camera can shoot 10 to 60 image frames within 1 s, preferably 25 to 30. Since the human body being shot may be in motion, the frames differ from one another; therefore, when synthesizing a three-dimensional stereoscopic image, the pair of frames shot by the first camera and the second camera at the same moment is chosen. This avoids a mismatch between the synthesized stereoscopic image and the user's actual gesture and improves recognition accuracy. If the user stands still, each camera may shoot only one image within the preset time, or shoot several and select one, as the input for the subsequent recognition process.
Optionally, depending on the shooting performance of the cameras, the preset time may contain M shooting moments in total, at each of which both the first camera and the second camera shoot a photo. The images containing a human body shot simultaneously by the two cameras at the M shooting moments may all be synthesized into M three-dimensional stereoscopic images, or only the images shot at N of the moments may be synthesized into N stereoscopic images, where M ≥ N.

An image is a single picture shot by a camera; image frames are a series of pictures shot continuously within a fixed time, and an image frame sequence consists of a series of images.
Of course, when synthesizing the three-dimensional stereoscopic images, several images shot continuously by the first camera may each be synthesized with the corresponding image among several shot continuously by the second camera (the shooting time of each image from the second camera corresponding to the photo shot by the first camera at the same moment).
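Pairing frames by shooting moment, as described above, can be sketched as follows. The timestamps and tolerance are illustrative values, not taken from the embodiment.

```python
# Pair each first-camera frame with the second-camera frame shot at the
# same moment (here: nearest timestamp within a tolerance, in milliseconds).
def pair_synchronized(frames_a, frames_b, tol_ms=5):
    pairs = []
    for t_a, img_a in frames_a:
        t_b, img_b = min(frames_b, key=lambda f: abs(f[0] - t_a))
        if abs(t_b - t_a) <= tol_ms:  # discard pairs that are not simultaneous
            pairs.append((img_a, img_b))
    return pairs

cam1 = [(0, "a0"), (40, "a1"), (80, "a2")]
cam2 = [(1, "b0"), (39, "b1"), (120, "b2")]
print(pair_synchronized(cam1, cam2))  # expected: [('a0', 'b0'), ('a1', 'b1')]
```

Frame "a2" finds no partner within tolerance and is dropped, matching the requirement that only simultaneously shot frames be synthesized.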
The manner of synthesizing a three-dimensional stereoscopic image from the images containing a human body shot simultaneously by the first camera and the second camera is not the primary subject of the present invention; there are various implementations in the prior art, and the embodiment of the present invention does not limit it. Since the manner and principle of synthesizing a stereoscopic image are the same for every pair of images shot by the two cameras within the preset time, the embodiment of the present invention is described only with reference to a second image and a third image, which are any pair of images shot simultaneously by the first camera and the second camera, respectively, within the preset time, and which carry no special significance.
Illustratively, as shown in Figure 2, step S101 may be implemented in the following way:
S1011: obtaining each pixel of the second image;
The specific manner of obtaining each pixel of the second image is not described in detail here; it may be realized by the prior art, for example by particle filtering.
After each pixel of the second image is obtained, a coordinate system may be set up for the second image and the third image, so that each pixel on the two images can be represented by coordinates, as shown in Figures 3a and 3b. Of course, there may be other ways to uniquely mark corresponding pixels on the second image and the third image, which are not described in detail here.

It should be noted that, when obtaining the three-dimensional stereoscopic image, the human body contour of the second image may be extracted first; after the contour is extracted, each pixel within it is obtained, and step S1012 is executed on each of those pixels. This can further improve recognition accuracy and avoid introducing background or interference into the stereoscopic image.
S1012: establishing a preset window with each pixel of the second image as the center pixel, wherein the preset window contains M pixels determined by a preset distance and centered on that central pixel;
Figure 3a is a schematic diagram of establishing a preset window with any one pixel of the second image as the center pixel. The preset window may be the region obtained by extending L length units from the central pixel in each direction (up, down, left, and right), so that the preset distance is 2L; the M pixels are then all the pixels contained in that region. The embodiment of the present invention does not limit the specific size of L, which may be set according to the precision actually required.
S1013: obtaining the pixel value of the preset window;
Since the preset window contains M pixels, its pixel value is the sum of the grayscale values of those M pixels; the specific manner of calculating the grayscale value of each pixel is not described in detail here. For example, if the preset window consists of any one pixel as the center together with the pixels to its left and right, the window contains 5 pixels, and its pixel value is the sum of their 5 grayscale values.
S1014: according to the pixel value of the preset window, extracting from the third image the region whose pixel value differs least from that of the preset window as the target region, as shown in Figure 3b;
Since a preset window is established for each pixel of the second image, and the manner and principle of finding the target region in the third image according to the window's pixel value are the same for every pixel, the embodiment of the present invention is described only with reference to a first pixel, which is any one pixel in the second image and carries no special significance.
Illustratively, as shown in Figure 4, step S1014 may be implemented in the following way:
S10141: determining the coordinates of the first pixel in the second image, and establishing a first preset window centered on the first pixel, as shown in Figure 3a;
S10142: with the ordinate of the first pixel kept unchanged, selecting each candidate region from the third image, the window size of each candidate region being identical to that of the first preset window; each candidate region is established with some pixel of the third image as its center pixel, and the ordinate of every pixel in the candidate region is identical to the ordinate of the first pixel;
The window size (or window extent) of a candidate region refers to the region obtained by extending L length units in each direction (up, down, left, and right) from the candidate region's central pixel according to the preset distance 2L;
S10143: calculating the pixel value of each candidate region, the pixel value being the sum of the grayscale values of all pixels in the candidate region;
S10144: determining as the target region the candidate region whose pixel value differs least from the pixel value of the preset window.
When the coordinates of the first pixel have been obtained, the first pixel may be traversed across every pixel of the third image along the direction from the third image toward the second image, keeping the ordinate unchanged, and a matching algorithm such as SAD (Sum of Absolute Differences) or SSD (Sum of Squared Differences) may be used to extract from the third image the region whose pixel value differs least from that of the preset window as the target region, for example point d shown in Figure 3c.
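As a concrete sketch of S10141 to S10144, the following searches one scanline of a tiny synthetic right image for the window that minimizes the SAD against a window around the first pixel in the left image. The images and window size are illustrative; a real implementation would operate on camera frames.

```python
# SAD block matching along one scanline (sketch of S10141-S10144).
def sad(a, b):
    # Sum of Absolute Differences between two equal-sized windows.
    return sum(abs(p - q) for row_a, row_b in zip(a, b) for p, q in zip(row_a, row_b))

def window(img, cx, cy, L):
    # (2L+1) x (2L+1) window centered at (cx, cy); caller keeps it in bounds.
    return [row[cx - L:cx + L + 1] for row in img[cy - L:cy + L + 1]]

def match_pixel(left, right, cx, cy, L=1):
    # Keep the ordinate cy fixed, scan every admissible abscissa in `right`,
    # and return the center abscissa of the minimum-SAD candidate window.
    ref = window(left, cx, cy, L)
    best_x, best_cost = None, None
    for x in range(L, len(right[0]) - L):
        cost = sad(ref, window(right, x, cy, L))
        if best_cost is None or cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

# Synthetic pair: the bright pixel in `left` appears 2 px further left in `right`.
left  = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 9, 0, 0], [0, 0, 0, 0, 0, 0]]
right = [[0, 0, 0, 0, 0, 0], [0, 9, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
print(match_pixel(left, right, cx=3, cy=1))  # expected: 1
```

The abscissa difference between the matched centers (3 versus 1) is the disparity used to derive the distance information of S103.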
Of course, to reduce the amount of computation, after the coordinates of the first pixel are obtained, the target region may be chosen from only those candidate regions of the third image whose ordinate is identical to that of the first pixel and whose abscissa is greater than or equal to it.

The embodiment of the present invention may also proceed from the third image instead: along the direction from the second image toward the third image, and with the ordinate kept unchanged, the preset window constituted by each pixel of the third image traverses the candidate regions of the second image, and the region of the second image with the smallest pixel value difference is chosen as the target region.
S1015: determining the central pixel of each target region;
S1016: matching the central pixel of each preset window of the second image with the central pixel of the corresponding target region, to obtain the three-dimensional stereoscopic image corresponding to the second image.
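Once each center pixel of the second image is matched to a target-region center in the third image, the horizontal offset (disparity) between the pair yields the distance information used in S103. The conversion below uses the standard pinhole stereo relation Z = f · B / d; note that this formula and the focal-length and baseline values are standard stereo background, not stated in the embodiment.

```python
# Standard stereo triangulation: depth Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the two cameras, and d the
# disparity (abscissa difference) between matched center pixels.
def depth_from_disparity(x_left, x_right, focal_px, baseline_cm):
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: match is invalid")
    return focal_px * baseline_cm / d

# Illustrative numbers: f = 700 px, baseline = 10 cm, disparity = 35 px.
print(depth_from_disparity(135, 100, focal_px=700, baseline_cm=10))  # 200.0 cm
```

A larger disparity thus means a closer user, which is why jump-free distances across the N moments indicate a stationary viewer.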
Preferably, to improve recognition accuracy, the human body contour in the first image needs to be extracted; on the basis of this contour, the pixel information of each pixel is obtained, and the distance information of the corresponding pixels is obtained from the three-dimensional stereoscopic image. Since the user's body should lie roughly in one plane, its pixels have similar distance information; therefore, before recognition, the distances of the pixels corresponding to the human body in the stereoscopic image may be averaged, so that the human body within the contour is separated from interference information such as the background and the user's body is extracted with high precision.
Further, extracting the human body contour in the three-dimensional stereoscopic image corresponding to the first image comprises:
S1021: establishing a horizontal histogram and a vertical histogram of the distance information for the three-dimensional stereoscopic image corresponding to the first image;
S1022: performing straight-line detection on the horizontal histogram and the vertical histogram by a least-squares algorithm;
S1023: extracting the horizontal straight lines with identical ordinates from the horizontal histogram after straight-line detection, and extracting the vertical straight lines with identical abscissas from the vertical histogram;
S1024: obtaining the human body contour of the three-dimensional stereoscopic image corresponding to the first image according to the horizontal straight lines and the vertical straight lines.
There are many ways to extract the human body contour, which are not described in detail here; illustratively, the method may be realized by an eight-neighborhood search.
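A simplified reading of S1021 to S1024 is sketched below: the horizontal histogram counts body pixels per row and the vertical histogram per column of a binary body mask, and the rows and columns whose counts exceed a threshold bound the contour. This is an assumption-laden toy; the embodiment fits straight lines by least squares rather than thresholding, and the mask here is synthetic.

```python
# Projection-histogram sketch of S1021-S1024. The horizontal histogram counts
# body pixels per row, the vertical histogram per column; rows/columns whose
# count reaches a threshold bound the body contour. (Simplification: the
# embodiment uses least-squares straight-line detection instead.)
def contour_bounds(mask, min_count=1):
    rows = [sum(row) for row in mask]        # horizontal histogram
    cols = [sum(col) for col in zip(*mask)]  # vertical histogram
    body_rows = [i for i, c in enumerate(rows) if c >= min_count]
    body_cols = [j for j, c in enumerate(cols) if c >= min_count]
    return (min(body_rows), max(body_rows)), (min(body_cols), max(body_cols))

# Body pixels (1s) occupy rows 1-2 and columns 1-3 of this 4x5 mask.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(contour_bounds(mask))  # expected: ((1, 2), (1, 3))
```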
For step S104, after the distance information corresponding to the at least one pixel determined in each of the N three-dimensional stereoscopic images has been obtained, it is necessary to compare whether the differences between the N pieces of distance information fall within the preset threshold range. For example, within a preset time of 5 seconds, the first camera and the second camera each shoot 10 two-dimensional images; each pair of two-dimensional images is synthesized into a stereoscopic image, and the distance information of 10 human body contours is finally obtained from the stereoscopic images. If the distances of these 10 human body contours differ from one another by no more than a preset threshold, for example 10 cm, this shows that the user has been moving within a small range, or standing still, in front of the terminal; that is, it further shows that the user has stayed in front of the terminal, for example a television set, for at least the preset time of 5 seconds, which indicates that the user intends to watch TV. Step S105 is then executed.
Further, before step S105 generates the power-on instruction, the method further comprises:
S1041: performing human eye recognition on the images containing a human body shot by the first camera at the N shooting moments, to obtain the user's eye contour;
There are many implementations of human eye recognition on a two-dimensional image in the prior art. For example, skin color segmentation may first be performed on the first image, edge detection may then be performed on the segmented image, and the user's eye contour is thereby obtained.
S1042: identifying, frame by frame, the eye contour change information between adjacent moments among the N shooting moments, and matching it against a human eye feature library in which the user's eye actions are prestored;
S1043: selecting, from the human eye feature library, the eye action with the smallest difference from the adjacent eye contour change information as the target eye action;
S1044: if the target eye action meets the requirement of a preset power-on instruction, generating the power-on instruction.
Suppose the user is asleep in front of the television set; the user's eye contour is then always closed, from which it can further be judged that the user does not intend to watch TV, and no automatic power-on instruction need be generated. If the user's eyes are open normally in front of the television set, at least one open-eye contour is captured among the N shooting moments.
Specifically, when identifying the user's eye motion, the eye contour change information between the multiple adjacent two-dimensional images may be matched, by a tracking algorithm such as a joint probabilistic data association filter (JPDAF), a multiple hypothesis tracking (MHT) algorithm, or a dynamic multi-target assignment algorithm, against the user eye actions prestored in the eye feature library, so as to identify the target eye action corresponding to the user's current eye contour, such as opening the eyes or rapid blinking, and to execute the operation instruction corresponding to that target eye action, that is, to generate the corresponding power-on instruction. For example, if the system identifies the user's eye action as rapid blinking, and the corresponding target eye action meets the requirement of a preset power-on instruction (the eye actions corresponding to a power-on instruction may include rapid blinking, keeping the eyes open, or normal opening of the eyes, indicating that the user is watching the TV and wishes to turn it on), the system generates the corresponding power-on instruction after the identification.
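A toy version of S1042 to S1044 follows, matching an observed open/closed eye sequence against prestored eye actions by a simple Hamming distance. The library entries and blink pattern are hypothetical, and the JPDAF/MHT trackers mentioned above are far more elaborate than this sketch.

```python
# Sketch of S1042-S1044: encode per-frame eye state as 1 (open) / 0 (closed)
# and pick the prestored action whose pattern differs least from the
# observation (Hamming distance as a stand-in for contour-change matching).
EYE_LIBRARY = {                      # hypothetical prestored eye actions
    "asleep":      [0, 0, 0, 0, 0, 0],
    "eyes_open":   [1, 1, 1, 1, 1, 1],
    "rapid_blink": [1, 0, 1, 0, 1, 0],
}
POWER_ON_ACTIONS = {"eyes_open", "rapid_blink"}

def target_eye_action(observed):
    def dist(pattern):
        return sum(a != b for a, b in zip(observed, pattern))
    return min(EYE_LIBRARY, key=lambda name: dist(EYE_LIBRARY[name]))

observed = [1, 0, 1, 0, 1, 1]        # mostly blinking
action = target_eye_action(observed)
print(action, action in POWER_ON_ACTIONS)  # expected: rapid_blink True
```

An "asleep" sequence of all zeros would match the closed-eye pattern instead, so no power-on instruction would be generated.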
Those skilled in the art will appreciate that, symmetrically to the automatic power-on function, the same recognition method is applicable to automatic power-off of the television. If the television set is playing and the cameras detect rapid blinking by the user, this may also indicate that the user wishes to turn off the television or put it on standby; a signal may then be transmitted to the TV's power management module, and a shutdown instruction is executed.
The embodiment of the present invention also provides a terminal power-on control system, as shown in Figure 5. Each function of the system corresponds to the terminal power-on control method of the above embodiment of the present invention; reference may be made to the description of that embodiment, and details are not repeated here. As shown in Figure 5, the terminal power-on control system is applied to a terminal 60 and comprises: a first camera 601 and a second camera 602 arranged in parallel on the terminal, and an image processing system 603, an image recognition system 604, and an execution system 605 running on the terminal's processor;
wherein the first camera 601 and the second camera 602 are on the same horizontal line;
the first camera 601 and the second camera 602 are configured to shoot at least one image containing a human body within a preset time;
the image processing system 603 is configured to synthesize, for each of N shooting moments within the preset time, the images containing a human body shot simultaneously by the first camera and the second camera into one of N three-dimensional stereoscopic images;
the image recognition system 604 is configured to extract the human body contour in the three-dimensional stereoscopic image corresponding to a first image, wherein the first image is any one of the images shot by the first camera at the N shooting moments; to obtain, on the three-dimensional stereoscopic image corresponding to the first image, distance information corresponding to at least one pixel of the human body contour; and to compare whether the differences between the distance information corresponding to the at least one pixel determined in each of the N three-dimensional stereoscopic images fall within a preset threshold range;
the execution system 605 is configured to generate a power-on instruction when the comparison result is affirmative, and to control the terminal to turn on according to the power-on instruction.
The embodiment of the present invention thus provides a terminal power-on control system that synthesizes the images containing a human body shot simultaneously by the first camera and the second camera into three-dimensional stereoscopic images, obtains the distance information of at least one pixel of the human body contour on the stereoscopic image corresponding to the two-dimensional image shot by the first camera, and compares whether the differences between the distance information determined from the N stereoscopic images fall within the preset threshold range. If the N pieces of distance information differ by no more than the threshold, the user's distance from the terminal has not jumped, from which it can be determined that the user, for example, has been watching the TV for a period of time and intends to watch the terminal; a power-on instruction is then generated and the terminal is controlled to turn on. Compared with the prior art, this avoids the susceptibility to the surrounding environment, low recognition accuracy, and poor sensitivity of infrared detection when sensing whether a user is present. By combining dual-camera stereoscopic ranging with a human body recognition algorithm, the system determines the user's power-on intention and automatically turns on the terminal, greatly improving the user's control experience while ensuring high real-time performance and high precision.
Optionally, as shown in Figure 6, the image processing system 603 comprises:
a first acquisition unit 6031, configured to obtain each pixel of the second image;
an establishing unit 6032, configured to establish a preset window with each pixel of the second image as the center pixel, wherein the preset window contains M pixels determined by a preset distance and centered on that central pixel;
a second acquisition unit 6033, configured to obtain the pixel value of the preset window;
an extraction unit 6034, configured to extract from the third image, according to the pixel value of the preset window, the region whose pixel value differs least from that of the preset window as the target region;
a determination unit 6035, configured to determine the central pixel of each target region;
a generation unit 6036, configured to match the central pixel of each preset window of the second image with the central pixel of the corresponding target region, to obtain the three-dimensional stereoscopic image corresponding to the second image.
Optionally, the extraction unit 6034 comprises:
a determining module, configured to determine the coordinates of the first pixel in the second image and to establish a first preset window centered on the first pixel;
a selection module, configured to select from the third image, with the ordinate of the first pixel kept unchanged, all candidate regions of the same size as the first preset window, each candidate region being established with some pixel of the third image as its center pixel, the ordinate of every pixel in the candidate region being identical to that of the first pixel;
a computing module, configured to calculate the pixel value of each candidate region, the pixel value being the sum of the grayscale values of all pixels in the candidate region;
a determination module, configured to determine as the target region the candidate region whose pixel value differs least from the pixel value of the first preset window.
Optionally, the image recognition system 604 comprises a contour extraction unit and a pixel extraction unit, the contour extraction unit being specifically configured to:
establish a horizontal histogram and a vertical histogram of the distance information for the three-dimensional stereoscopic image corresponding to the first image;
perform straight-line detection on the horizontal histogram and the vertical histogram by a least-squares algorithm;
extract the horizontal straight lines with identical ordinates from the horizontal histogram after straight-line detection, and extract the vertical straight lines with identical abscissas from the vertical histogram;
obtain the human body contour of the three-dimensional stereoscopic image corresponding to the first image according to the horizontal straight lines and the vertical straight lines.
Optionally, the image recognition system 604 further comprises a recognition unit, which comprises:
an eye analysis module, configured to perform human eye recognition on the images containing a human body shot by the first camera at the N shooting moments, to obtain the user's eye contour;
an eye matching module, configured to identify, frame by frame, the eye contour change information between adjacent moments among the N shooting moments, and to match it against a human eye feature library in which the user's eye actions are prestored;
a target selection module, configured to select, from the human eye feature library, the eye action with the smallest difference from the adjacent eye contour change information as the target eye action;
an instruction control module, configured to control the execution system to generate the power-on instruction when the target eye action meets the requirement of a preset power-on instruction.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division by logical function, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute some of the steps of the methods of the embodiments of the present invention. The storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.