CN109660779A - Projection-based touch point localization method, projection device and storage medium - Google Patents


Info

Publication number
CN109660779A
CN109660779A (application CN201811563670.9A)
Authority
CN
China
Prior art keywords
depth
image
connected region
value
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811563670.9A
Other languages
Chinese (zh)
Inventor
陈维亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN201811563670.9A
Publication of CN109660779A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Abstract

Embodiments of the present application provide a projection-based touch point localization method, a projection device, and a storage medium. When locating the touch points that an interaction body produces on a projected picture, the method uses the depth difference between depth images captured before and after the interaction body interacts with the picture. First, connected regions whose pixel depth values fall within a certain depth range are identified in the depth difference image; each of these regions corresponds to one touch point. Then, within each identified region, the depth range is narrowed further, the pixels falling within the narrowed range are selected, and the position of the touch point is computed from those pixels. The resulting touch point positions lie closer to the actual locations where the interaction body contacts the projected picture, which improves the localization accuracy of touch points on the projected picture.

Description

Projection-based touch point localization method, projection device and storage medium
Technical field
The present application relates to the field of projection technology, and in particular to a projection-based touch point localization method, a projection device, and a storage medium.
Background art
With the continuous development of projection technology, projection devices with interactive functions have emerged. For example, furnishings such as projection speakers and projection lamps have been widely adopted in daily life and bring great convenience to users.
A projection device with interactive functions can project image content onto a projection plane to form a projected picture, and a user can interact with the device by performing touch operations on that picture.
In actual use, when a user performs a touch operation on the projected picture, the projection device maps the user's touch position on the projected picture to the corresponding position in the real screen content, using a pre-calibrated correspondence between positions on the projected picture and the real screen content, and then responds. In the prior art, however, the localization accuracy of the user's touch points on the projected picture is low, which leads to incorrect responses to the user's operations and a poor user experience.
Summary of the invention
Various aspects of the present application provide a projection-based touch point localization method, a projection device, and a storage medium, so as to improve the localization accuracy of touch points on a projected picture, respond correctly to user operations, and improve the user experience.
An embodiment of the present application provides a projection-based touch point localization method, applicable to a projection device, comprising:
acquiring a first depth image captured when a first interaction body is not interacting with a first projected picture, and a second depth image captured when the first interaction body is interacting with the first projected picture, the first projected picture being formed by projection from the projection device;
obtaining a depth difference image from the depth differences between the first depth image and the second depth image;
identifying, in the depth difference image, at least one target connected region in which the pixel depth values fall within a preset first depth value range, the at least one target connected region corresponding to at least one touch point produced by the first interaction body on the first projected picture; and
determining the position of the at least one touch point from the position coordinates of those pixels in the at least one target connected region whose depth values fall within a preset second depth value range, wherein the second depth value range is contained in the first depth value range.
An embodiment of the present application also provides a projection device, comprising a memory, a projection assembly, a depth-of-field sensor, and a processor, wherein:
the projection assembly is configured to project a first projected picture;
the depth-of-field sensor is configured to capture a first depth image when a first interaction body is not interacting with the first projected picture, and a second depth image when the first interaction body is interacting with the first projected picture;
the memory is configured to store a computer program; and
the processor, coupled to the memory, is configured to execute the computer program so as to:
acquire the first depth image and the second depth image, the first projected picture being formed by projection from the projection device;
obtain a depth difference image from the depth differences between the first depth image and the second depth image;
identify, in the depth difference image, at least one target connected region in which the pixel depth values fall within a preset first depth value range, the at least one target connected region corresponding to at least one touch point produced by the first interaction body on the first projected picture; and
determine the position of the at least one touch point from the position coordinates of those pixels in the at least one target connected region whose depth values fall within a preset second depth value range, wherein the second depth value range is contained in the first depth value range.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above projection-based touch point localization method.
In the embodiments of the present application, when locating the touch points that an interaction body produces on a projected picture, the depth difference between depth images captured before and after the interaction body interacts with the picture is used. First, connected regions whose pixel depth values fall within a certain depth range are identified in the depth difference image; each of these regions corresponds to one touch point. Then, within each identified region, the depth range is narrowed further, the pixels falling within the narrowed range are selected, and the position of the touch point is computed from those pixels. The resulting touch point positions lie closer to the actual locations where the interaction body contacts the projected picture, which improves the localization accuracy of touch points on the projected picture. When the user's operation is then responded to according to the located touch point positions, the accuracy of the response improves, and so does the user experience.
Brief description of the drawings
The drawings described here are provided for further understanding of the present application and constitute a part of it. The illustrative embodiments of the application and their descriptions are used to explain the application and do not constitute an undue limitation on it. In the drawings:
Fig. 1a is a schematic flowchart of a projection-based touch point localization method provided by an embodiment of the present application;
Fig. 1b is a schematic diagram of the effective region of a depth image provided by an embodiment of the present application;
Fig. 1c is a binary image provided by an embodiment of the present application;
Fig. 1d is a depth difference image provided by an embodiment of the present application;
Fig. 1e is a schematic diagram of the target connected regions in a depth difference image provided by an embodiment of the present application;
Fig. 1f is a schematic diagram of the touch point positions in a depth difference image provided by an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a projection device provided by an embodiment of the present application.
Detailed description of the embodiments
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions of the application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of the application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the application without creative work shall fall within the protection scope of the application.
In the prior art, the localization accuracy of a user's touch points on a projected picture is low, which leads to incorrect responses to the user's operations and a poor user experience. For this technical problem, the embodiments of the present application provide a solution whose basic idea is as follows: when locating the touch points that an interaction body produces on a projected picture, use the depth difference between depth images captured before and after the interaction body interacts with the picture. First, identify in the depth difference image the connected regions whose pixel depth values fall within a certain depth range; each of these regions corresponds to one touch point. Then, within each identified region, narrow the depth range further, select the pixels falling within the narrowed range, and compute the position of the touch point from those pixels. The resulting positions lie closer to the actual locations where the interaction body contacts the projected picture, which improves the localization accuracy of touch points on the projected picture; when the user's operation is then responded to according to the located positions, the accuracy of the response improves, and so does the user experience.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the drawings.
Fig. 1a is a schematic flowchart of a projection-based touch point localization method provided by an embodiment of the present application. The method is applicable to a projection device, which may be implemented as a home device such as a projection speaker, a projection lamp, or a projector. As shown in Fig. 1a, the method comprises:
101: Acquire a first depth image captured when a first interaction body is not interacting with a first projected picture, and a second depth image captured when the first interaction body is interacting with the first projected picture, the first projected picture being formed by projection from the projection device.
102: Obtain a depth difference image from the depth differences between the first depth image and the second depth image.
103: Identify, in the depth difference image, at least one target connected region in which the pixel depth values fall within a preset first depth value range, the at least one target connected region corresponding to at least one touch point produced by the first interaction body on the first projected picture.
104: Determine the position of the at least one touch point from the position coordinates of those pixels in the at least one target connected region whose depth values fall within a preset second depth value range, wherein the second depth value range is contained in the first depth value range.
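Steps 101-104 can be sketched in Python. The millimeter depth ranges follow the examples given later in this description; the 4-connectivity labeling, array layout, and synthetic depth values are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def connected_regions(mask):
    """4-connected components of a boolean mask, as lists of (row, col) pixels."""
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                stack, region = [(r, c)], []
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions

def locate_touch_points(depth_before, depth_after, coarse=(1.0, 30.0), fine=(1.0, 5.0)):
    diff = depth_before - depth_after                       # step 102: depth difference image
    mask = (diff >= coarse[0]) & (diff <= coarse[1])        # step 103: first depth value range
    points = []
    for region in connected_regions(mask):                  # one target region per touch point
        tip = [(y, x) for y, x in region if fine[0] <= diff[y, x] <= fine[1]]
        if tip:                                             # step 104: mean coordinate of the
            ys, xs = zip(*tip)                              # pixels in the second depth range
            points.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return points

# Tiny synthetic check: a 3-pixel "finger" whose tip has a small depth difference.
before = np.full((6, 6), 100.0)
after = before.copy()
after[1, 2] = 97.0                   # fingertip: difference 3mm, inside the fine range
after[2, 2] = after[3, 2] = 90.0     # rest of the finger: difference 10mm, coarse range only
points = locate_touch_points(before, after)
print(points)  # [(1.0, 2.0)] -> the fingertip pixel, not the finger's middle
```

Restricting the averaging to the fine range is what pulls the result toward the fingertip rather than the center of the whole finger region.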
In this embodiment, the projection mode of the projection device is not limited. The projected picture may be formed by vertical projection, floor projection, or oblique projection from the projection device, but is not limited thereto.
In this embodiment, the projection device is provided with a depth-of-field module, which may be a range sensor such as an infrared range sensor, a laser range sensor, or a depth camera, but is not limited thereto. The depth-of-field module captures depth images within its acquisition range. The depth value of each pixel in a depth image reflects the distance of that pixel from the depth-of-field module, i.e., its distance from the projection device.
In this embodiment, to determine the positions of the touch points between the interaction body and the projected picture, in step 101 the depth-of-field module captures a depth image of the picture projected by the projection device onto the projection plane. For ease of description and distinction, in the embodiments of the present application this depth image is defined as the first depth image, and the projected picture at this time is defined as the first projected picture, which is formed by projection from the projection device. Also for ease of description and distinction, the interaction body here is defined as the first interaction body. The first interaction body may be a user's finger, a stylus, a mechanical arm, a small stick, or the like; the embodiments of the present application place no limitation on it.
Further, in step 101, the depth-of-field module also captures a depth image while the first interaction body is interacting with the first projected picture; this depth image is defined as the second depth image.
Since each pixel in a depth image reflects the distance of that pixel from the depth-of-field module, i.e., from the projection device, when the first interaction body reaches into the first projected picture to interact with it, the first interaction body is closer to the projection device than the projection plane is. Therefore, by comparing the second depth image with the first depth image, the region occupied by the first interaction body on the first projected picture can be determined.
Considering that an interaction body generally reaches into the projected picture at a tilt when performing touch operations, the depth information at the touch point differs from that of the other parts of the body, and its depth difference from the corresponding pixel in the first depth image also differs from the depth differences of other points. Furthermore, since the interaction body has a certain volume, the region where it touches the projected picture forms a connected region. Therefore, in step 102, a depth difference image can be obtained from the depth differences between the first and second depth images. In step 103, at least one target connected region whose pixel depth values fall within the preset first depth value range is then identified in the depth difference image, each target connected region corresponding to one touch point produced by the first interaction body on the first projected picture. The reason for using connected regions is that the depth difference image may contain noise whose depth differences also fall within the first depth value range; such noise typically appears as isolated points, and taking connected regions as the regions corresponding to touch points rejects these isolated noise points.
The preset first depth value range can be set flexibly according to the interaction body. For example, if the interaction body is a user's finger, what is determined is the position of the finger pad on the projected picture, and according to the thickness of a finger the first depth value range may be [1mm, 30mm] or [1mm, 25mm], etc. That is, if the interaction body is a user's finger and the first depth value range is [1mm, 30mm], then in the depth difference image of the first and second depth images, the region formed by the pixels whose depth difference is greater than or equal to 1mm and less than or equal to 30mm is taken as the position of the finger pad (the target connected region).
Further, in step 104, a second depth value range is set, which is contained in the first depth value range. For each of the target connected regions identified in step 103, the pixels whose depth values fall within the second depth value range are selected, and the position of the corresponding touch point is determined from those pixels, one touch point per target connected region. The touch point positions determined in this way are closer to the actual contact positions of the interaction body with the projected picture, which improves the localization accuracy of the touch points. For example, if the interaction body is a user's finger, the determined touch point position is closer to the position of the fingertip; if the interaction body is a stylus or the like, the determined position is closer to the position of the pen tip.
The preset second depth value range can likewise be set flexibly according to the interaction body. For example, if the interaction body is a user's finger, what is determined is the position of the fingertip on the projected picture, and according to the thickness of the fingertip the second depth value range may be [1mm, 5mm]. That is, if the interaction body is a user's finger, then within each target connected region of the depth difference image, the region formed by the pixels whose depth difference is greater than or equal to 1mm and less than or equal to 5mm is taken as the position of the fingertip.
In this embodiment, when locating the touch points that an interaction body produces on a projected picture, the depth difference between depth images captured before and after the interaction body interacts with the picture is used. First, connected regions whose pixel depth values fall within a certain depth range are identified in the depth difference image, each corresponding to one touch point. Then, within each identified region, the depth range is narrowed further, the pixels falling within the narrowed range are selected, and the position of the touch point is computed from those pixels. The resulting positions lie closer to the actual locations where the interaction body contacts the projected picture, which improves the localization accuracy of touch points on the projected picture; when the user's operation is then responded to according to the located positions, the accuracy of the response improves, and so does the user experience.
Further, determining the touch points of the interaction body on the projected picture only requires the effective region of the depth image where the projected picture lies, while the captured depth image includes not only the depth image of the projected picture but also that of its background, such as the parts of the projection plane outside the projected picture. In this embodiment, the region of the depth image where the projected picture lies is therefore defined as the effective region of the depth image.
To reduce the amount of computation and speed up localization, the extraction of the depth difference image and the subsequent steps of target connected region identification and touch point localization can be performed only on the effective regions of the first and second depth images. Based on this, an optional implementation of step 102 is: determine the effective regions in the first and second depth images according to N known reference positions, which are used to determine the boundary of the projected picture in the depth images; then take the difference of the effective regions of the two images to obtain the depth difference image. The value of N can be set flexibly according to the shape of the projected picture; for example, N may be an integer greater than or equal to 4. For a quadrilateral projected picture, when N = 4 the 4 known reference positions may be the positions of the four vertices of the picture; for a rectangular or square picture, they may instead be the midpoints of the four sides, but are not limited thereto. For a circular projected picture, N may likewise be an integer greater than or equal to 4; for example, when N = 4 the 4 known reference positions may be 4 equally spaced points on the circular boundary of the picture.
The pixel matrix of the effective region is now illustrated taking the depth image as a rectangular image, the projected picture as a rectangle, N = 4, and the 4 known reference positions as the 4 vertices of the projected picture. As shown in Fig. 1b, assume the depth image contains 640 pixels in the x direction and 480 pixels in the y direction, i.e., the coordinates of its 4 vertices are (0, 0), (640, 0), (0, 480) and (640, 480), so the pixel matrix of the depth image is 640*480. Assume the coordinates of the 4 known reference positions are (x1, y1), (x2, y1), (x1, y2), (x2, y2), where 0 < x1 < x2 < 640 and 0 < y1 < y2 < 480. Then the pixel matrix of the effective region of the depth image is (x2-x1+1)*(y2-y1+1), which involves less computation than the entire depth image.
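A minimal sketch of this cropping arithmetic, assuming row-major numpy arrays; the corner coordinates below are hypothetical, since the text only fixes the 640*480 image size and the (x2-x1+1)*(y2-y1+1) formula.

```python
import numpy as np

# A 640x480 depth image stored as (rows, cols) = (480, 640); values are arbitrary.
depth = np.random.default_rng(0).uniform(500.0, 2000.0, size=(480, 640))

# Assumed calibration corners (x1, y1) and (x2, y2) of the projected picture.
x1, y1, x2, y2 = 100, 80, 539, 399

# Crop to the effective region; slicing is inclusive of (x2, y2), hence the +1.
effective = depth[y1:y2 + 1, x1:x2 + 1]

# The effective region's pixel matrix is (y2-y1+1) x (x2-x1+1), smaller than
# the full 640x480 image, so differencing and labeling over it cost less.
print(effective.shape)  # (320, 440)
```

Every later step (difference, binarization, labeling, averaging) then operates on `effective` instead of the full frame.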
Further, the N known reference positions may be preset reference positions or reference positions specified during calibration. When they are specified during calibration, before the effective regions of the first and second depth images are determined from the N known reference positions, a depth image of the projected picture can be captured while no interaction body is interacting with it; for ease of description and distinction, this depth image is defined as the third depth image. In addition, N reference depth images are captured while the N specified reference positions on the projected picture are interacted with, one at a time. For ease of description and distinction, the projected picture here is defined as the second projected picture, which may be the same as or different from the first projected picture, without limitation; and the interaction body here is defined as the second interaction body, which may be the same as or different from the first interaction body, without limitation.
Further, the position information of the N reference positions is calculated from the depth differences between the third depth image and each of the N reference depth images.
Optionally, N reference depth difference images can be obtained from the depth differences between the third depth image and the N reference depth images. From each reference depth difference image, a reference target connected region whose pixel depth values fall within a preset third depth value range is identified; the reference target connected regions correspond to the N reference positions respectively. The position information of the N reference positions is then determined from the position coordinates of the pixels in the reference target connected regions whose depth values fall within a preset fourth depth value range, where the fourth depth value range is contained in the third depth value range. For the determination of the reference target connected regions, reference can be made to the description below of selecting the at least one target connected region from the connected regions contained in the depth difference image, which is not repeated here.
The third and fourth depth value ranges can be set flexibly according to the second interaction body; when the second interaction body is the same as the first interaction body, the third depth value range can be set equal to the first depth value range, and the fourth depth value range equal to the second depth value range.
Further, optionally, for each reference depth difference image, the mean coordinate of the pixels in its reference target connected region whose depth values fall within the preset fourth depth value range can be calculated, and each mean coordinate is taken as the position information of one reference position, thereby obtaining the position information of the N reference positions.
Further, when identifying in the depth difference image the at least one target connected region whose depth values fall within the first depth value range, a binary image can be used. That is, the value of each pixel of the depth difference image whose depth value falls within the first depth value range is set to 1, and the value of every other pixel is set to 0, yielding a binary image; the connected regions of pixels with value 1 in the binary image are then mapped to connected regions in the depth difference image.
For example, if the interaction body is a user's finger and the first depth value range is [1mm, 30mm], the value of each pixel of the depth difference image whose depth difference is greater than or equal to 1mm and less than or equal to 30mm is set to 1, and the value of every other pixel is set to 0, yielding the binary image of the depth difference image shown in Fig. 1c. From Fig. 1c it can be seen that the target connected regions are A, B, C and D, i.e., the regions formed are the positions of the finger pads (target connected regions).
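The binarization step can be sketched as follows; the tiny difference image and its values are made up purely for illustration.

```python
import numpy as np

# A toy depth difference image (values in mm). Only differences in the first
# depth value range [1mm, 30mm] belong to the interaction body's regions.
diff = np.array([[ 0.0,  0.0, 0.5],
                 [ 3.0, 12.0, 0.0],
                 [28.0, 35.0, 0.0]])

# Pixels inside the range become 1, everything else 0 (as in Fig. 1c).
binary = ((diff >= 1.0) & (diff <= 30.0)).astype(np.uint8)
print(binary)
```

Note that 35.0 falls outside the range and is zeroed, even though it is a large difference; the upper bound exists so that objects far in front of the plane (e.g. an arm hovering above it) are not mistaken for contact.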
Further, to avoid counting connected regions that correspond to noise in the depth-difference image, any noise regions among the connected regions in the depth-difference image need to be rejected. It can be observed that, in the depth-difference image, the connected region corresponding to a touch point of the interacting body is also connected with the depth information of other parts of the interacting body. For example, when the interacting body is a user's finger, it includes not only the finger pad but also the other finger joints. Based on this, at least one target connected region can be selected from the connected regions contained in the depth-difference image according to the depth information of the first interacting body contained in that image.
Further, the region corresponding to the first interacting body in the depth-difference image includes the at least one target connected region, i.e., the connected regions corresponding to the touch points, and excludes the connected regions corresponding to noise. Accordingly, a candidate region of the first interacting body in the depth-difference image can be determined according to the depth information of the interacting body contained in the depth-difference image; then, from the connected regions in the depth-difference image, at least one connected region located within the candidate region is selected as the at least one target connected region.
For example, assuming the first interacting body is a user's finger, the depth-difference image is as shown in Fig. 1d. That is, according to the depth information of the hand in the depth-difference image shown in Fig. 1d, the candidate region of the hand in the depth-difference image can be determined. Further, in the binary image shown in Fig. 1c, region E is a noise region; the connected regions A, B, C, D, E in the binary image can be mapped into the depth-difference image, yielding the depth-difference image shown in Fig. 1e.
Further, from A, B, C, D, E, the connected regions located within the mapped candidate region are selected as the target connected regions, i.e., the regions labelled A1, B1, C1, D1 in Fig. 1e.
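The noise rejection described above — take connected regions of the thresholded depth-difference image, then keep only those inside the candidate region — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the depth-difference image is assumed to be a 2-D list of millimetre values, the candidate region an axis-aligned box `(y0, y1, x0, x1)`, and "located in the candidate region" is read as "all pixels inside the box"; all names are illustrative.

```python
from collections import deque

def connected_regions(mask):
    """4-connected component labelling on a boolean grid (pure-Python BFS)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    regions, next_label = {}, 1
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                q, pixels = deque([(y, x)]), []
                labels[y][x] = next_label
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                regions[next_label] = pixels
                next_label += 1
    return regions

def target_regions(depth_diff, lo, hi, candidate):
    """Threshold to the first depth-value range, then keep only connected
    regions lying entirely inside the candidate box (noise regions fall outside)."""
    mask = [[lo <= v <= hi for v in row] for row in depth_diff]
    y0, y1, x0, x1 = candidate
    return [pix for pix in connected_regions(mask).values()
            if all(y0 <= y <= y1 and x0 <= x <= x1 for y, x in pix)]
```

Here the single-pixel blob outside the candidate box plays the role of region E in Fig. 1c and is discarded.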
Further, for the at least one target connected region, the mean coordinate of the pixels whose depth value lies within the second depth-value range can be computed separately for each region, and each mean coordinate is taken as the position of one of the at least one touch point.
For example, for the target connected regions A1, B1, C1, D1 shown in Fig. 1e, the mean coordinate of the pixels whose depth value lies in [1mm, 5mm] is computed separately for each of A1, B1, C1, D1, and these mean coordinates are taken as the positions of the touch points of the fingers on the projection surface, i.e., the positions shown as circular dots in Fig. 1e. By contrast, the positions shown as triangles in Fig. 1f are obtained by directly averaging the coordinates of all pixels of the target connected regions A1, B1, C1, D1. As can be seen from Fig. 1f, the position obtained by directly averaging the coordinates of a whole target connected region is biased toward the middle of the finger, whereas the mean coordinate of the pixels whose depth value lies within the second depth-value range, taken as the position of a touch point generated by the first interacting body on the projected picture, is closer to the fingertip, i.e., closer to the actual position of the touch point the finger generates on the projected picture. In other words, taking the mean coordinate of the pixels whose depth value lies within the second depth-value range in a target connected region as the position of a touch point generated by the first interacting body on the projected picture further improves the precision of touch-point localization. In turn, when the user's touch operation is subsequently responded to according to the determined touch-point positions, the correct content can be displayed to the user, improving the user experience.
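The fingertip-biased averaging above can be sketched in a few lines. This is an illustrative sketch, not the patent's code: the depth-difference image is assumed to be a 2-D list of millimetre values, and the [1mm, 5mm] second depth-value range is taken from the example above.

```python
def touch_point(depth_diff, region_pixels, lo=1.0, hi=5.0):
    """Mean (y, x) coordinate of the region's pixels whose depth difference
    lies in the narrower second depth-value range [lo, hi]; restricting the
    average to these near-surface pixels pulls the estimate toward the
    fingertip rather than the middle of the finger."""
    pts = [(y, x) for y, x in region_pixels if lo <= depth_diff[y][x] <= hi]
    if not pts:
        return None  # no pixel of this region falls in the second range
    n = len(pts)
    return (sum(y for y, _ in pts) / n, sum(x for _, x in pts) / n)
```

Averaging all of `region_pixels` instead (the triangles of Fig. 1f) would weight every joint of the finger equally, which is why it drifts away from the fingertip.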
It should be noted that the steps of the methods provided by the above embodiments may all be executed by the same device, or the method may be executed with different devices as the executing subjects. For example, the executing subject of steps 101-104 may be device A; or the executing subject of step 101 may be device A and the executing subject of step 102 device B; and so on.
In addition, some of the flows described in the above embodiments and the accompanying drawings contain multiple operations that appear in a particular order, but it should be clearly understood that these operations need not be executed in the order in which they appear herein, or may be executed in parallel. Sequence numbers of operations, such as 101 and 102, are merely used to distinguish the different operations; the sequence numbers themselves do not represent any execution order. Furthermore, these flows may include more or fewer operations, and these operations may be executed sequentially or in parallel.
Correspondingly, an embodiment of the present application also provides a computer-readable storage medium storing computer instructions. When the computer instructions are executed by one or more processors, the one or more processors are caused to execute the steps of the above projection-based touch-point localization method.
Fig. 2 is a structural schematic diagram of a projection device provided by an embodiment of the present application. The projection device may be implemented as a projection speaker, a projection lamp, a projector, etc. As shown in Fig. 2, the projection device includes: a memory 20a, a processor 20b, a projection module 20c and a depth-of-field module 20d.
The projection module 20c is configured to project a first projected picture.
The depth-of-field module 20d, arranged on the front surface of the projection-device body, is configured to capture a first depth image when a first interacting body is not interacting with the first projected picture and a second depth image when the first interacting body is interacting with the first projected picture. The depth-of-field module 20d may include a distance sensor such as an infrared distance sensor or a laser distance sensor, but is not limited thereto.
The memory 20a is configured to store a computer program, and may be configured to store various other data to support operation on the projection device. The processor 20b can execute the computer program stored in the memory 20a to implement the corresponding control logic. The memory 20a may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
The processor 20b is coupled to the memory 20a and executes the above computer program to: obtain a depth-difference image according to the depth difference between the first depth image and the second depth image; identify, from the depth-difference image, at least one target connected region in which the depth values of the pixels lie within a preset first depth-value range, the at least one target connected region corresponding to at least one touch point generated by the first interacting body on the first projected picture; and determine the positions of the at least one touch point respectively according to the position coordinates of the pixels in the at least one target connected region whose depth values lie within a preset second depth-value range; wherein the second depth-value range is contained in the first depth-value range.
In an alternative embodiment, when obtaining the depth-difference image, the processor 20b is specifically configured to: determine the effective regions in the first depth image and the second depth image according to N known reference positions, where the N reference positions are used to determine the boundary of the projected picture in a depth image, and N is an integer greater than or equal to 4; and take the difference of the effective regions in the first depth image and the second depth image to obtain the depth-difference image.
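The effective-region cropping and differencing just described can be sketched as follows. This is an illustrative sketch under simplifying assumptions (names are not from the patent): the N reference positions are treated as axis-aligned corners of the projected picture in depth-image coordinates, so a bounding box suffices; a skewed projector-camera setup would need a perspective mapping instead.

```python
def depth_difference(first, second, refs):
    """Crop both depth maps to the bounding box of the N (>= 4) reference
    positions -- taken here as the projected picture's boundary -- and
    subtract pixel-wise to obtain the depth-difference image."""
    ys = [y for y, _ in refs]
    xs = [x for _, x in refs]
    y0, y1, x0, x1 = min(ys), max(ys) + 1, min(xs), max(xs) + 1
    return [[first[y][x] - second[y][x] for x in range(x0, x1)]
            for y in range(y0, y1)]
```

Restricting the subtraction to the effective region keeps later connected-region analysis from picking up depth changes outside the projected picture.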
Optionally, the projection module 20c is further configured to project a second projected picture. Correspondingly, the depth-of-field module 20d is further configured to: capture a third depth image when a second interacting body is not interacting with the second projected picture, and N reference depth images when the second interacting body interacts respectively with the N reference positions specified in the second projected picture.
Correspondingly, before determining the effective regions in the first depth image and the second depth image according to the N known reference positions, the processor 20b is further configured to: calculate the position information of the N reference positions respectively according to the depth differences between the third depth image and the N reference depth images.
Further, when calculating the position information of the N reference positions, the processor 20b is specifically configured to: obtain N reference depth-difference images according to the depth differences between the third depth image and the N reference depth images; identify, from the N reference depth-difference images respectively, the reference target connected regions in which the depth values of the pixels lie within a preset third depth-value range, wherein the reference target connected regions correspond to the N reference positions respectively; and determine the position information of the N reference positions respectively according to the position coordinates of the pixels in the reference target connected regions of the N reference depth-difference images whose depth values lie within a preset fourth depth-value range; the fourth depth-value range is contained in the third depth-value range.
In other alternative embodiments, when identifying, from the depth-difference image, the at least one target connected region in which the depth values of the pixels lie within the preset first depth-value range, the processor 20b is specifically configured to: set the value of the pixels in the depth-difference image whose depth values lie within the first depth-value range to 1, and set the value of the remaining pixels in the depth-difference image to 0, to obtain a binary image; map the connected regions whose pixel value is 1 in the binary image to connected regions in the depth-difference image; and select at least one target connected region from the connected regions contained in the depth-difference image according to the depth information of the first interacting body contained in the depth-difference image.
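The binarization step above is a simple in-range threshold; a one-line sketch (illustrative, with an assumed list-of-lists representation of the depth-difference image):

```python
def binarize(depth_diff, lo, hi):
    """1 where the depth difference falls in the first depth-value range
    [lo, hi], 0 elsewhere -- the binary image whose 1-valued connected
    regions are then mapped back into the depth-difference image."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in depth_diff]
```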
Further, when selecting the at least one target connected region from the connected regions contained in the depth-difference image, the processor 20b is specifically configured to: determine the candidate region of the first interacting body in the depth-difference image according to the depth information of the first interacting body contained in the depth-difference image; and select, from the connected regions in the depth-difference image, at least one connected region located within the candidate region as the at least one target connected region.
In a further alternative embodiment, when determining the position of the at least one touch point, the processor 20b is specifically configured to: calculate separately the mean coordinate of the pixels in each of the at least one target connected region whose depth values lie within the second depth-value range, and take each mean coordinate as the position of one touch point.
In some embodiments, the projection device further includes a communication component 20e. The communication component 20e is configured to facilitate wired or wireless communication between the projection device and other devices. The projection device can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives, via a broadcast channel, a broadcast signal or broadcast-related information from an external broadcast management system. In an exemplary embodiment, the communication component may also be implemented based on near-field communication (NFC) technology, radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In other embodiments, the projection device further includes a power-supply component 20f. The power-supply component 20f is configured to provide electric power to the various components of the projection device. The power-supply component 20f may include a power-management system, one or more power supplies, and other components associated with generating, managing and distributing electric power for the device in which the power-supply component resides.
In some embodiments, the projection device may also include an audio input/output unit 20g, which may be configured to output and/or input audio signals, for example in a projection speaker. For example, the audio input/output unit 20g includes a microphone (MIC); when the device in which the audio component resides is in an operating mode, such as a call mode, a recording mode or a speech-recognition mode, the microphone is configured to receive an external audio signal. The received audio signal may further be stored in the memory or sent via the communication component 20e. In some embodiments, the audio component also includes a loudspeaker for outputting audio signals. For example, for a projection device with a speech-interaction function, speech interaction with the user can be realized through the audio input/output unit 20g.
Correspondingly, the projection device may also include a sound-processing unit 20h for processing the audio signals input to or output from the audio input/output unit 20g.
In some embodiments, the projection device further includes a display 20i. The display 20i may include a liquid-crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
Correspondingly, the projection device may also include an image-processing unit 20j for performing signal processing, such as image-quality correction of the picture signal output from the processor 20b, and converting its resolution to match the screen resolution of the display 20i. Then, a display driving unit 20k successively selects each row of pixels of the display 20i and scans each row of pixels of the display 20i line by line, thereby providing pixel signals based on the signal-processed picture signal.
It should be noted that only some components are schematically shown in Fig. 2; this does not mean that the projection device must include all of the components shown in Fig. 2, nor that the projection device can only include the components shown in Fig. 2. In addition to the components shown in Fig. 2, the projection device may also include an input operation unit (not shown in Fig. 2). The input operation unit includes at least one operating member for performing input operations, such as a key, a button, a switch or another component with a similar function; user instructions are received through the operating members and output to the processor 20b. Optionally, the projection device may also include, according to application requirements, components for fixing the projection device, such as a bracket or a fixing platform.
The projection device provided by this embodiment determines, according to the depth difference between the depth images of the interacting body before and after interacting with the projected picture, the positions of the touch points on the projected picture when the interacting body interacts with the projected picture. First, from the depth-difference image, connected regions in which the depth values of the pixels lie within a certain depth range are determined, these connected regions corresponding to the touch points respectively. Further, within the determined connected regions, the depth range is narrowed further, the pixels within the narrowed depth range are determined in each connected region, and the positions of the touch points are then determined from these pixels. This brings the determined touch-point positions closer to the actual positions at which the interacting body interacts with the projected picture, and thus improves the accuracy of locating the touch points on the projected picture. Further, when the user's operations on the projected picture are responded to according to the determined touch-point positions, the accuracy of the response can be improved, which in turn improves the user experience.
It should be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, etc.; they do not represent a sequential order, nor do they restrict "first" and "second" to being of different types.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data-processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
The memory may include non-permanent memory in a computer-readable medium, random-access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can realize information storage by any method or technology. The information can be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact-disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
The above is only an embodiment of the present application and is not intended to limit the present application. Various changes and variations of the present application are possible for those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (10)

1. A projection-based touch-point localization method, applicable to a projection device, characterized by comprising:
capturing a first depth image when a first interacting body is not interacting with a first projected picture and a second depth image when the first interacting body is interacting with the first projected picture, the first projected picture being formed by projection by the projection device;
obtaining a depth-difference image according to the depth difference between the first depth image and the second depth image;
identifying, from the depth-difference image, at least one target connected region in which the depth values of the pixels lie within a preset first depth-value range, the at least one target connected region corresponding to at least one touch point generated by the first interacting body on the first projected picture;
determining the positions of the at least one touch point respectively according to the position coordinates of the pixels in the at least one target connected region whose depth values lie within a preset second depth-value range; wherein the second depth-value range is contained in the first depth-value range.
2. The method according to claim 1, characterized in that obtaining the depth-difference image according to the depth difference between the first depth image and the second depth image comprises:
determining the effective regions in the first depth image and the second depth image according to N known reference positions; the N reference positions are used to determine the boundary of the projected picture in a depth image, and N is an integer greater than or equal to 4;
taking the difference of the effective regions in the first depth image and the second depth image to obtain the depth-difference image.
3. The method according to claim 2, characterized in that identifying, from the depth-difference image, the at least one target connected region in which the depth values of the pixels lie within the preset first depth-value range comprises:
setting the value of the pixels in the depth-difference image whose depth values lie within the first depth-value range to 1, and setting the value of the remaining pixels in the depth-difference image to 0, to obtain a binary image;
mapping the connected regions whose pixel value is 1 in the binary image to connected regions in the depth-difference image;
selecting at least one target connected region from the connected regions contained in the depth-difference image according to the depth information of the first interacting body contained in the depth-difference image.
4. The method according to claim 3, characterized in that selecting the at least one target connected region from the connected regions contained in the depth-difference image according to the depth information of the first interacting body contained in the depth-difference image comprises:
determining the candidate region of the first interacting body in the depth-difference image according to the depth information of the first interacting body contained in the depth-difference image;
selecting, from the connected regions in the depth-difference image, at least one connected region located within the candidate region as the at least one target connected region.
5. The method according to claim 2, characterized in that, before determining the effective regions in the first depth image and the second depth image according to the N known reference positions, the method further comprises:
capturing a third depth image when a second interacting body is not interacting with a second projected picture, and N reference depth images when the second interacting body interacts respectively with N reference positions specified in the second projected picture;
calculating the position information of the N reference positions respectively according to the depth differences between the third depth image and the N reference depth images.
6. The method according to claim 5, characterized in that calculating the position information of the N reference positions respectively according to the depth differences between the third depth image and the N reference depth images comprises:
obtaining N reference depth-difference images according to the depth differences between the third depth image and the N reference depth images;
identifying, from the N reference depth-difference images respectively, the reference target connected regions in which the depth values of the pixels lie within a preset third depth-value range, wherein the reference target connected regions correspond to the N reference positions respectively;
determining the position information of the N reference positions respectively according to the position coordinates of the pixels in the reference target connected regions of the N reference depth-difference images whose depth values lie within a preset fourth depth-value range; the fourth depth-value range is contained in the third depth-value range.
7. The method according to any one of claims 1-6, characterized in that determining the positions of the at least one touch point respectively according to the position coordinates of the pixels in the at least one target connected region whose depth values lie within the preset second depth-value range comprises:
calculating separately the mean coordinate of the pixels in each of the at least one target connected region whose depth values lie within the second depth-value range, and taking each mean coordinate as the position of one of the at least one touch point.
8. A projection device, characterized by comprising: a memory, a projection assembly, a depth-of-field sensor and a processor; wherein,
the projection assembly is configured to project a first projected picture;
the depth-of-field sensor is configured to capture a first depth image when a first interacting body is not interacting with the first projected picture and a second depth image when the first interacting body is interacting with the first projected picture;
the memory is configured to store a computer program;
the processor is coupled to the memory and executes the computer program to:
obtain a depth-difference image according to the depth difference between the first depth image and the second depth image;
identify, from the depth-difference image, at least one target connected region in which the depth values of the pixels lie within a preset first depth-value range, the at least one target connected region corresponding to at least one touch point generated by the first interacting body on the first projected picture;
determine the positions of the at least one touch point respectively according to the position coordinates of the pixels in the at least one target connected region whose depth values lie within a preset second depth-value range; wherein the second depth-value range is contained in the first depth-value range.
9. The projection device according to claim 8, characterized in that, when identifying, from the depth-difference image, the at least one target connected region in which the depth values of the pixels lie within the preset first depth-value range, the processor is specifically configured to:
set the value of the pixels in the depth-difference image whose depth values lie within the first depth-value range to 1, and set the value of the remaining pixels in the depth-difference image to 0, to obtain a binary image;
map the connected regions whose pixel value is 1 in the binary image to connected regions in the depth-difference image;
select at least one target connected region from the connected regions contained in the depth-difference image according to the depth information of the first interacting body contained in the depth-difference image.
10. A computer-readable storage medium storing computer instructions, characterized in that, when the computer instructions are executed by one or more processors, the one or more processors are caused to perform the steps of the method according to any one of claims 1-7.
CN201811563670.9A 2018-12-20 2018-12-20 Touch-control independent positioning method, projection device and storage medium based on projection Pending CN109660779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811563670.9A CN109660779A (en) 2018-12-20 2018-12-20 Touch-control independent positioning method, projection device and storage medium based on projection

Publications (1)

Publication Number Publication Date
CN109660779A true CN109660779A (en) 2019-04-19

Family

ID=66115462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811563670.9A Pending CN109660779A (en) 2018-12-20 2018-12-20 Touch-control independent positioning method, projection device and storage medium based on projection

Country Status (1)

Country Link
CN (1) CN109660779A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966235A (en) * 2020-07-28 2020-11-20 锐达互动科技股份有限公司 Method, device, equipment and medium for realizing touch projection product without repositioning
CN112732162A (en) * 2021-03-30 2021-04-30 北京芯海视界三维科技有限公司 Projection interaction method, device and system and computer storage medium
CN114827561A (en) * 2022-03-07 2022-07-29 成都极米科技股份有限公司 Projection control method, projection control device, computer equipment and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017130504A1 (en) * 2016-01-25 2017-08-03 裕行 池田 Image projection device
CN107818584A (en) * 2017-09-27 2018-03-20 歌尔科技有限公司 Determination method and device, projecting apparatus, the optical projection system of user's finger positional information
CN108227919A (en) * 2017-12-22 2018-06-29 潍坊歌尔电子有限公司 Determining method and device, projecting apparatus, the optical projection system of user's finger location information
CN108337494A (en) * 2018-05-18 2018-07-27 歌尔科技有限公司 A kind of calibration method of projection device, device, projection device and terminal device

Similar Documents

Publication Publication Date Title
JP6033502B2 (en) Touch input control method, touch input control device, program, and recording medium
US11237703B2 (en) Method for user-operation mode selection and terminals
CN105472469B (en) Video playing progress adjustment method and device
CN109660779A (en) Touch-control independent positioning method, projection device and storage medium based on projection
US10205817B2 (en) Method, device and storage medium for controlling screen state
CN105718056B (en) Gesture identification method and device
US11222223B2 (en) Collecting fingerprints
CN108307308B (en) Positioning method, device and storage medium for wireless local area network equipment
EP3327548A1 (en) Method, device and terminal for processing live shows
EP3179749A1 (en) Device displaying method and apparatus, computer program and recording medium
CN107092359A (en) Virtual reality viewing angle relocation method, device, and terminal
KR20170072164A (en) Method, device and terminal for optimizing air mouse remote control
US11228860B2 (en) Terminal positioning method, apparatus, electronic device and storage medium
CN104881342B (en) Terminal test method and device
US20180288566A1 (en) Method for Triggering Operation and Portable Electronic Device
CN107272896A (en) Method and device for switching between VR mode and non-VR mode
KR20150110319A (en) Method and device for displaying image
CN110634488A (en) Information processing method, device and system and storage medium
CN110110315A (en) Pending item management method and device
JP2014132463A (en) Display device, input device and coordinate correction method of display device and input device
EP3246805B1 (en) Gesture operation response method and device
CN105487774A (en) Image grouping method and device
CN106291630B (en) Drift data modification method and device
CN109600594B (en) Projection-based touch point positioning method, projection equipment and storage medium
CN109725804B (en) Projection-based track identification method, projection equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-04-19