CN104571512B - Method and apparatus for assisting multiple projection ends in performing projection - Google Patents
- Publication number
- CN104571512B (application CN201410842843.6A / CN201410842843A)
- Authority
- CN
- China
- Prior art keywords
- information
- scanning
- projection
- processing unit
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1407—General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
Abstract
It is an object of the invention to provide a method and apparatus for assisting multiple projection ends in performing projection. The method according to the invention comprises the following steps: obtaining information to be output; according to the information to be output, determining the part output information in the information to be output that corresponds to each of one or more of the multiple projection ends; and transmitting the corresponding part output information to the one or more projection ends respectively, so that each projection end performs projection based on its corresponding part output information.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a method and apparatus for assisting multiple projection ends in performing projection.
Background technology
In the prior art, a projection device is typically connected to another device, so that the projection device outputs a corresponding image based on information to be output received from that other device. For example, a common projector is connected to a notebook computer so as to project the interface content of the notebook computer onto a screen. Because multiple devices are required, the occasions and manners in which a user can perform projection are significantly limited.
Summary of the invention
It is an object of the present invention to provide a method and apparatus for assisting multiple projection ends in performing projection.
According to one aspect of the invention, there is provided a method for assisting multiple projection ends in performing projection, wherein the multiple projection ends correspond to a processing unit, and wherein the method comprises the following steps:
a. obtaining information to be output;
b. according to the information to be output, determining the part output information in the information to be output that corresponds to each of one or more of the multiple projection ends;
c. transmitting the corresponding part output information to the one or more projection ends respectively, so that each projection end performs projection based on its corresponding part output information.
According to another aspect of the present invention, there is also provided a processing unit for assisting multiple projection ends in performing projection, wherein the multiple projection ends correspond to the processing unit, and wherein the processing unit comprises:
an acquisition device for obtaining information to be output;
a determining device for determining, according to the information to be output, the part output information in the information to be output that corresponds to each of one or more of the multiple projection ends;
a transmitting device for transmitting the corresponding part output information to the one or more projection ends respectively, so that each projection end performs projection based on its corresponding part output information.
Compared with the prior art, the present invention has the following advantages: the information to be output for projection can be processed, and the multiple projection ends contained in a wearable device each project a part of the information to be output, thereby completing the projection output of part or all of the information to be output, so that the user enjoys a better visual experience. Moreover, according to the scheme of the present invention, scanning information can be transmitted between wearable devices, so that a user can project, based on scanning information received from another user, a virtual image corresponding to that other user; the two users can then interact based on each other's virtual images, improving the user experience.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 illustrates a flow chart of a method for assisting multiple projection ends in performing projection according to the present invention;
Fig. 2 illustrates a schematic structural diagram of a processing unit for assisting multiple projection ends in performing projection according to the present invention.
The same or similar reference numerals in the drawings denote the same or similar components.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 illustrates a flow chart of a method for assisting multiple projection ends in performing projection according to the present invention. The method according to the invention comprises step S1, step S2 and step S3.
The method according to the invention is implemented by a processing unit, contained in a wearable device, for performing projection.
The wearable device includes, but is not limited to, any device that is wearable on the human body and contains multiple projection ends, for example, clothes or ornamental accessories comprising the processing unit and its corresponding multiple projection ends.
The processing unit includes any electronic device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. Preferably, the processing unit can perform image processing operations including, but not limited to, image segmentation, image merging, 3D effects, and the like.
The projection end includes any device that performs projection. Preferably, the projection end comprises a holographic projector capable of aerial projection. Preferably, the projection end comprises miniature holographic projectors that can be distributed over the surface of the wearable device.
Preferably, the processing unit is connectable to each projection end contained in the wearable device, so as to transmit to it the output information to be projected.
Preferably, the wearable device also includes multiple scanning ends. A scanning end includes any device that can be used to obtain scan content information comprising visual information such as image information or video information, for example a miniature camera. More preferably, a scanning end also includes a device that can be used to obtain scanning position information corresponding to itself, for example a micro-sensor for obtaining three-dimensional coordinates.
Preferably, a wearable device according to the present invention can transmit data over a network to and from other wearable devices according to the present invention. The network in which the wearable device resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, and the like.
It should be noted that the wearable device, projection end, scanning end and network described above are merely examples; other existing or future wearable devices, projection ends, scanning ends and networks, where applicable to the present invention, shall also be included within the scope of protection of the present invention and are incorporated herein by reference.
Referring to Fig. 1, in step S1 the processing unit obtains information to be output.
The information to be output includes, but is not limited to, either of the following:
1) view-related information: information from another device that can be used for projection, for example an image file, a video file, or an interface image of a smartphone;
2) scanning information: scan content information from one or more scanning ends corresponding to the processing unit, for example the scan image information or scan video information obtained by a scanning end. Preferably, the scanning information may also include scanning position information corresponding to the scan content information, for example three-dimensional coordinate information or two-dimensional coordinate information.
Then, in step S2, the processing unit determines, according to the information to be output, the part output information in the information to be output that corresponds to each of one or more of the multiple projection ends.
The manner in which the processing unit makes this determination includes, but is not limited to, either of the following:
1) When the information to be output includes view-related information, the processing unit first determines, among the multiple projection ends, one or more projection ends for projecting the information to be output. Then, based on a predetermined division rule, the processing unit divides the information to be output into part output information corresponding to each of the one or more projection ends.
Preferably, the multiple projection ends correspond respectively to multiple regions of the wearable device.
Specifically, the processing unit may determine, according to the view-related information, the number of projection ends used to project the information to be output and the region in which each projection end is located. Then, based on the predetermined division rule, the processing unit divides the view-related information into several pieces of part output information and distributes each piece to the determined projection ends.
Preferably, the processing unit may also determine the number of projection ends required based on an attribute of the information to be output itself.
For example, when the information to be output is image information, the image information may be classified as a "high-definition image" or a "standard image" based on its pixel resolution, and a larger number of projection ends is used to project a "high-definition image" while a smaller number of projection ends is used to project a "standard image".
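This resolution-based choice of projector count can be sketched as follows. The threshold and the two counts are illustrative assumptions; the patent only states that more projection ends serve a "high-definition image" and fewer serve a "standard image".

```python
def choose_projector_count(width, height,
                           hd_threshold=1920 * 1080,
                           hd_count=50, standard_count=20):
    """Pick how many projection ends to use based on image resolution.

    The threshold and counts are illustrative assumptions, not values
    taken from the patent.
    """
    if width * height >= hd_threshold:
        return hd_count        # treated as a "high-definition image"
    return standard_count      # treated as a "standard image"
```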
According to a first example of the invention, the wearable device is a smart vest, and the smart vest comprises a processing unit and 50 miniature holographic projectors evenly distributed over the front and sides of the vest. The smart vest is connected by a network to user A's smartphone. When, in step S1, the processing unit in the smart vest obtains a picture image_1 from the smartphone, the processing unit judges, based on the pixel information of image_1, that image_1 is a high-definition picture, and further determines that all 50 miniature holographic projectors contained in the smart vest will be used to project image_1. Then, based on the predetermined division rule, the processing unit performs an image segmentation operation to divide image_1 by area into 50 pieces of part output information, and assigns the 50 pieces of part output information, in order from left to right and top to bottom, to the 50 miniature holographic projectors contained in the smart vest.
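The left-to-right, top-to-bottom division rule in this example can be sketched as a simple tiling of the image grid. Treating the 50 projectors as a 5×10 grid is an assumed layout; the patent does not specify the arrangement.

```python
def split_into_tiles(image, rows, cols):
    """Divide an image (a 2-D list of pixel rows) into rows*cols tiles,
    returned left-to-right, top-to-bottom, one tile per projection end.

    A minimal sketch of the division rule in the first example; image
    dimensions are assumed to be divisible by rows and cols.
    """
    height, width = len(image), len(image[0])
    tile_h, tile_w = height // rows, width // cols
    tiles = []
    for r in range(rows):
        for c in range(cols):
            # Slice out the block of pixels belonging to tile (r, c).
            tile = [row[c * tile_w:(c + 1) * tile_w]
                    for row in image[r * tile_h:(r + 1) * tile_h]]
            tiles.append(tile)
    return tiles
```

Each entry of the returned list would then be transmitted to the projector with the same index.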
2) When the information to be output includes one or more items of scanning information, step S2 includes step S201 (not shown) and step S202 (not shown).
In step S201, the processing unit selects, from the one or more items of scanning information, at least one item of scanning information for projection.
Specifically, the processing unit may select the at least one item of scanning information for projection from the one or more items of scanning information based on a user selection, or based on a default content selection mechanism.
Preferably, the scan content information contains scanning end identification information, and the processing unit can determine, based on the identification information, the scanning end corresponding to the scan content information. More preferably, the scanning end identification information includes the number of each scanning end corresponding to the processing unit.
Preferably, the processing unit first determines a scanning projection mode and then, based on the determined scanning projection mode, selects from the one or more items of scanning information the at least one item of scanning information for projection.
The scanning projection mode includes any settable mode for performing projection. For example, when the items of scanning information correspond to multiple positions on a user's body, the scanning projection modes may include a front/back/side mode, so that the projection ends project the scanning information corresponding to the respective body part.
Then, in step S202, the processing unit selects, among the multiple projection ends, at least one projection end corresponding to each item of the at least one item of scanning information, and takes each item of scanning information as the part output information of the corresponding projection end(s).
According to a second example of the present invention, user A wears a wearable device Dev_1 in the form of clothes, wherein the wearable device Dev_1 comprises a processing unit Proc_1 which corresponds to 50 miniature holographic projectors evenly distributed over the front and side regions of the clothes and to 100 miniature cameras evenly distributed over the front, side and back regions of the clothes; the projectors are numbered in order from p_1 to p_50, and the miniature cameras are numbered in order from c_1 to c_100. User B wears a wearable device Dev_2 of the same style as Dev_1; this wearable device comprises a processing unit Proc_2, miniature holographic projectors p_1 to p_50 corresponding to Proc_2, and miniature cameras c_1 to c_100 corresponding to Proc_2. The distribution of the projectors and cameras on Dev_2 is identical to that on Dev_1.
Dev_1 contains two predetermined association modes between projectors and cameras: in the association mode "front mode", projector p_x corresponds to camera c_x, and in the association mode "back mode", projector p_x corresponds to camera c_(50+x), where x is an integer in the range [1, 50].
When, in step S1, the processing unit Proc_1 of wearable device Dev_1 obtains from wearable device Dev_2 the 20 items of scan video information numbered c_1 to c_10 and c_51 to c_60, where the cameras corresponding to the scan video information numbered c_1 to c_10 are located in user B's front region and the cameras corresponding to the scan video information numbered c_51 to c_60 are located in user B's back region, the processing unit Proc_1 prompts user A: "Display front or back?" and, based on the user operation selecting "front", determines that the cameras of the scanning information for projection are numbered c_1 to c_10.
Then, based on the correspondence in the front mode among the predetermined association modes between projectors and cameras, the processing unit Proc_1 in wearable device Dev_1 determines that the projectors numbered p_1 to p_10 in wearable device Dev_1 will respectively project the scanning information c_1 to c_10.
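The two association modes of this example reduce to a simple index mapping from camera number to projector number, which can be sketched as follows. Camera and projector identifiers are plain integers here; the p_/c_ prefixes appear only in the patent's text.

```python
def projector_for_camera(camera_id, mode):
    """Map a camera number to the projector that shows its feed,
    following the second example's association modes:
    'front': p_x <- c_x; 'back': p_x <- c_(50+x), x in [1, 50].
    """
    if mode == "front":
        x = camera_id          # c_x is shown by p_x
    elif mode == "back":
        x = camera_id - 50     # c_(50+x) is shown by p_x
    else:
        raise ValueError("unknown association mode: %s" % mode)
    if not 1 <= x <= 50:
        raise ValueError("camera %d invalid in %s mode" % (camera_id, mode))
    return x
```

Under this mapping, front-region scan c_7 is projected by p_7, while back-region scan c_57 is also routed to p_7 when the back mode is active.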
Then, in step S3, the processing unit transmits the corresponding part output information to the one or more projection ends respectively, so that each projection end performs projection based on its corresponding part output information.
Preferably, the processing unit may determine projected-position information for the one or more projection ends based on default projection orientation information. The projection orientation information indicates the position, relative to the wearable device to which the one or more projection ends belong, at which the overall image composed of the one or more pieces of part output information is to be projected. The projected-position information of a projection end indicates the position, relative to the wearable device, of the part output information corresponding to that projection end.
According to a preferred scheme of the present invention, the processing unit may determine the projected-position information corresponding to the one or more projection ends based on the line of sight of the user using the one or more projection ends, so that each projection end projects its corresponding part output information according to its respective projected-position information.
Specifically, the processing unit determines the projection orientation information according to the user's line of sight, and then determines the projected-position information of the one or more projection ends, so that each projection end projects its corresponding part output information according to its respective projected-position information and the image output by the corresponding projection ends is located at a predetermined position relative to the user.
For example, the predetermined position may be directly in front of the user's line of sight, or in front at a certain angle to the user's line of sight.
The processing unit may determine the user's line-of-sight direction based on the orientation of projection ends at one or more predetermined positions on the wearable device, or based on a predetermined setting of the user. For example, the orientation of a projection end located at a front position, translated vertically by a preset distance, may be taken as the user's line-of-sight direction; as another example, the orientation of a projection end preselected by the user may be taken as the line-of-sight direction.
Then, the processing unit transmits the projected-position information together with the corresponding part output information to the corresponding projection ends, so that each projection end performs projection based on its corresponding part output information and projected-position information.
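The transmission in step S3 pairs each projection end's part output information with its projected-position information. A minimal sketch of that packaging step, assuming dict-based bookkeeping keyed by projector number (the data layout is an assumption, not specified by the patent):

```python
def build_payloads(part_outputs, projected_positions):
    """Pair each projection end's part output information with its
    projected-position information before transmission (step S3).

    Both inputs are dicts keyed by projector number; every projector
    with a part output is assumed to also have a projected position.
    """
    return {p: {"output": part_outputs[p],
                "position": projected_positions[p]}
            for p in part_outputs}
```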
Continuing the foregoing first example: based on the projection orientation information "directly in front" set by user A, the processing unit determines, for each of the 50 miniature holographic projectors used for projection, the projection angle information corresponding to the projection orientation "directly in front". The processing unit then transmits to the 50 miniature holographic projectors their respective part output information and projection angle information, so that the 50 miniature holographic projectors project based on their corresponding part output information and projection angle information, thereby presenting the complete image of picture image_1 directly in front of user A.
According to a preferred embodiment of the present invention, each item of scanning information includes scan content information and scanning position information, and step S3 includes step S301 (not shown).
In step S301, the processing unit transmits the corresponding part output information to the one or more projection ends respectively, so that each projection end performs projection based on the scanning position information and scan content information in its corresponding part output information.
Preferably, the processing unit may determine the projected-position information of the one or more projection ends based on the scanning position information. The processing unit then transmits the projected-position information together with the corresponding part output information to the corresponding projection ends, so that each projection end performs projection based on its corresponding part output information and projected-position information.
For example, when the scanning information includes scan video information and corresponding three-dimensional coordinate information, the processing unit may determine, based on the three-dimensional coordinate information corresponding to each item of scan video information, the projected-position information of each projection end used to project that item of scan video information. The processing unit then transmits to each projection end its corresponding scan video information and projected-position information. Each projection end determines, according to the three-dimensional coordinate information of its corresponding scan video information, the depth information with which to project that scan video information, and performs projection based on the scan video information together with its corresponding projected-position information and depth information, so as to present, at the corresponding projected position, a stereoscopic image corresponding to each item of scan video information and its three-dimensional coordinate information.
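The patent does not fix a formula for the depth information a projection end derives from the three-dimensional coordinates. One plausible reading, sketched here as an illustrative assumption, is the Euclidean distance between the projector's own position and the scanned point:

```python
import math

def projection_depth(projector_pos, scan_pos):
    """Depth at which a projection end renders a scanned point,
    taken as the Euclidean distance between the projector's position
    and the scan's three-dimensional coordinate.

    This distance-based depth is an assumption; the patent only says
    depth is determined from the three-dimensional coordinate info.
    """
    return math.sqrt(sum((p - s) ** 2
                         for p, s in zip(projector_pos, scan_pos)))
```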
Preferably, the method according to the invention also includes step S4 (not shown) and step S5 (not shown).
In step S4, the processing unit receives the scanning information of one or more of the multiple scanning ends corresponding to itself.
In step S5, the processing unit sends the received one or more items of scanning information to another processing unit.
Preferably, the processing unit repeatedly performs the above steps S4 and S5 at a predetermined time interval, so that the other processing unit can continuously perform projection based on the one or more items of scanning information it receives.
For example, a first processing unit receives, every 0.1 seconds, the scanning information of the multiple scanning ends corresponding to itself, and sends that scanning information to a second processing unit, so that the projection devices of the wearable device containing the second processing unit can continuously project the image of the user of the wearable device to which the first processing unit belongs.
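The periodic S4/S5 relay described above can be sketched as a simple timed loop. The two callables and the cycle limit are assumptions made so the sketch is self-contained; the patent's loop runs continuously rather than for a fixed number of cycles.

```python
import time

def relay_scans(read_scanning_ends, send_to_peer, interval=0.1, cycles=3):
    """Periodic relay of steps S4/S5: every `interval` seconds, collect
    the scanning information from the local scanning ends (step S4) and
    forward it to the peer processing unit (step S5).

    `read_scanning_ends` and `send_to_peer` stand in for the device I/O
    and network transmission, which the patent leaves unspecified.
    """
    forwarded = []
    for _ in range(cycles):
        scans = read_scanning_ends()   # step S4: gather local scan info
        send_to_peer(scans)            # step S5: transmit to peer unit
        forwarded.append(scans)
        time.sleep(interval)
    return forwarded
```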
According to the method of the invention, the information to be output for projection can be processed, and the multiple projection ends contained in a wearable device each project a part of the information to be output, thereby completing the projection output of part or all of the information to be output, so that the user enjoys a better visual experience. Moreover, according to the scheme of the present invention, scanning information can be transmitted between wearable devices, so that a user can project, based on scanning information received from another user, a virtual image corresponding to that other user; the two users can then interact based on each other's virtual images, improving the user experience.
Fig. 2 illustrates a schematic structural diagram of a processing unit for assisting multiple projection ends in performing projection according to the present invention. The processing unit according to the invention comprises: an acquisition device 1, a determining device 2 and a transmitting device 3.
Referring to Fig. 2, the acquisition device 1 obtains information to be output.
The information to be output includes, but is not limited to, either of the following:
1) view-related information: information from another device that can be used for projection, for example an image file, a video file, or an interface image of a smartphone;
2) scanning information: scan content information from one or more scanning ends corresponding to the processing unit, for example the scan image information or scan video information obtained by a scanning end. Preferably, the scanning information may also include scanning position information corresponding to the scan content information, for example three-dimensional coordinate information or two-dimensional coordinate information.
Then, the determining device 2 determines, according to the information to be output, the part output information in the information to be output that corresponds to each of one or more of the multiple projection ends.
The manner in which the determining device 2 makes this determination includes, but is not limited to, either of the following:
1) When the information to be output includes view-related information, the determining device 2 further comprises a sub-determining device (not shown) and a dividing device (not shown).
The sub-determining device first determines, among the multiple projection ends, one or more projection ends for projecting the information to be output. Then, based on a predetermined division rule, the dividing device divides the information to be output into part output information corresponding to each of the one or more projection ends.
Preferably, the multiple projection ends correspond respectively to multiple regions of the wearable device.
Specifically, the sub-determining device may determine, according to the view-related information, the number of projection ends used to project the information to be output and the region in which each projection end is located. Then, based on the predetermined division rule, the dividing device divides the view-related information into several pieces of part output information and distributes each piece to the determined projection ends.
Preferably, the sub-determining device may also determine the number of projection ends required based on an attribute of the information to be output itself.
For example, when the information to be output is image information, the image information may be classified as a "high-definition image" or a "standard image" based on its pixel resolution, and a larger number of projection ends is used to project a "high-definition image" while a smaller number of projection ends is used to project a "standard image".
According to the first example of the invention, the wearable device is a smart vest, and the smart vest comprises a processing unit and 50 miniature holographic projectors evenly distributed over the front and sides of the vest. The smart vest is connected by a network to user A's smartphone. When the acquisition device 1 in the smart vest obtains the picture image_1 from the smartphone, the sub-determining device judges, based on the pixel information of image_1, that image_1 is a high-definition picture, and further determines that all 50 miniature holographic projectors contained in the smart vest will be used to project image_1. Then, based on the predetermined division rule, the dividing device performs an image segmentation operation to divide image_1 by area into 50 pieces of part output information, and assigns the 50 pieces of part output information, in order from left to right and top to bottom, to the 50 miniature holographic projectors contained in the smart vest.
2) When the information to be output includes one or more items of scanning information, the determining device 2 includes a first selection device (not shown) and a second selection device (not shown).
The first selection device selects, from the one or more items of scanning information, at least one item of scanning information for projection.
Specifically, the first selection device may select the at least one item of scanning information for projection from the one or more items of scanning information based on a user selection, or based on a default content selection mechanism.
Preferably, the scans content packet identification information containing scanning end, it is true that processing unit can be based on the identification information
Scanning end corresponding to the fixed scans content information.
It is highly preferred that the scanning end identification information includes the numbering of each scanning end corresponding with the processing unit.
Preferably, the processing unit also includes a mode determining device (not shown). The mode determining device first determines a scanning projection mode; then, based on the determined scanning projection mode, the first selection device selects at least one item of scanning information for projection from the one or more items of scanning information.
Here, the scanning projection mode includes a configurable mode for projection. For example, when the items of scanning information correspond to multiple positions on the user's body, the scanning projection mode may include a front/back/side mode, so that the projection ends project the scanning information corresponding to the respective body part.
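The front/back/side mode described above effectively acts as a filter over the items of scanning information. A minimal sketch, under the assumption that each item carries a body-part tag (the dictionary structure here is illustrative, not from the patent):

```python
def select_by_mode(scanning_items, mode):
    """Keep only the scanning information items whose body-part tag
    matches the chosen scanning projection mode ("front"/"back"/"side")."""
    return [item for item in scanning_items if item["part"] == mode]
```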
Then, the second selection device selects, from the multiple projection ends, at least one projection end respectively corresponding to each item of scanning information in the at least one item of scanning information, and takes each item of scanning information as the partial output information of the corresponding projection end(s) among the at least one projection end.
According to a second example of the invention, user A wears a wearable device Dev_1 in the form of a garment. Dev_1 includes a processing unit Proc_1, which corresponds to 50 miniature holographic projectors evenly distributed over the front and side regions of the garment, and to 100 miniature cameras evenly distributed over the front, side, and back regions of the garment; each projector is numbered in order from p_1 to p_50, and each miniature camera is numbered in order from c_1 to c_100. User B wears a wearable device Dev_2 of the same style as Dev_1, which includes a processing unit Proc_2, miniature holographic projectors p_1 to p_50 corresponding to Proc_2, and miniature cameras c_1 to c_100 corresponding to Proc_2. The distribution of projectors and cameras on Dev_2 is identical to that on Dev_1.
Dev_1 includes two predetermined association modes between projectors and cameras: in the association mode "front mode", projector p_x corresponds to camera c_x, and in the association mode "back mode", projector p_x corresponds to camera c_(50+x), where x is an integer in the range [1, 50].
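The two association modes can be expressed as a simple projector-to-camera mapping. A sketch under the assumption, as described above, that "front mode" pairs p_x with c_x and "back mode" pairs p_x with c_(50+x):

```python
def camera_for_projector(x, mode):
    """Return the camera number associated with projector p_x
    under the given association mode."""
    if not 1 <= x <= 50:
        raise ValueError("projector number must be in [1, 50]")
    if mode == "front":
        return x            # front mode: p_x <-> c_x
    if mode == "back":
        return 50 + x       # back mode:  p_x <-> c_(50+x)
    raise ValueError("unknown association mode")
```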
When the acquisition device 1 of processing unit Proc_1 in wearable device Dev_1 obtains, from wearable device Dev_2, 20 items of scan video information numbered c_1 to c_10 and c_51 to c_60 — where the cameras corresponding to the scan video information numbered c_1 to c_10 are located on user B's front region and the cameras corresponding to the scan video information numbered c_51 to c_60 are located on user B's back region — the mode determining device of Proc_1 prompts user A: "Display the front or the back?". Based on user A's selection of "front", the first selection device determines that the camera numbers of the scanning information for projection are c_1 to c_10.
Then, based on the correspondence in the front mode, i.e., the predetermined association mode between projectors and cameras, the second selection device in Dev_1 determines that the scanning information c_1 to c_10 selected for projection will be projected by the projectors numbered p_1 to p_10 in Dev_1, respectively.
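Under the front-mode correspondence, this selection step amounts to inverting the projector-to-camera mapping for the chosen camera numbers. An illustrative sketch (the function name and list representation are assumptions):

```python
def projectors_for_cameras(camera_numbers, mode="front"):
    """Map selected camera numbers back to the projector numbers that
    will project their scan video information (front: c_x -> p_x,
    back: c_(50+x) -> p_x)."""
    offset = 0 if mode == "front" else 50
    return [c - offset for c in camera_numbers]

# user A chose "front", cameras c_1..c_10 -> projectors p_1..p_10
assert projectors_for_cameras(range(1, 11)) == list(range(1, 11))
```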
Then, the transmitting device 3 transmits the corresponding partial output information to each of the one or more projection ends, so that each projection end projects based on its own corresponding partial output information.
Preferably, the processing unit can determine the projected position information of the one or more projection ends based on default projection orientation information. Here, the projection orientation information indicates the position, relative to the wearable device to which the one or more projection ends belong, at which the overall image composed of the one or more items of partial output information is to be projected; the projected position information of a projection end indicates the position, relative to the wearable device, of the partial output information corresponding to that projection end.
According to a preferred scheme of the invention, the processing unit also includes a sight-line determining device (not shown) and a position determining device (not shown).

The sight-line determining device determines the gaze direction of the user of the multiple projection ends; the position determining device determines, based on the user's gaze direction, the projected position information corresponding to the one or more projection ends, so that each projection end projects its own corresponding partial output information according to its respective projected position information.
Specifically, the position determining device determines projection orientation information according to the user's gaze, and in turn determines the projected position information of the one or more projection ends, so that each projection end projects its own corresponding partial output information according to its respective projected position information, causing the image output by the corresponding projection ends to be located at a predetermined position relative to the user.
For example, a position directly in front of the user's gaze, or a forward position at a certain angle to the user's gaze.
Here, the sight-line determining device can determine the user's gaze direction based on the orientation of projection ends at one or more predetermined positions on the wearable device, or based on a predetermined setting by the user. For example, the orientation of a projection end located at the front, translated vertically by a preset distance, may be taken as the user's gaze direction; or the orientation of a projection end preselected by the user may be taken as the gaze direction.
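One way to realize the gaze-direction heuristic above — taking the orientation of a front-mounted projection end, shifted vertically by a preset amount — is sketched below. All vector conventions (unit direction vectors, z as the vertical axis) are assumptions for illustration, not specified by the patent.

```python
import math

def gaze_direction(front_projector_dir, vertical_offset=0.0):
    """Estimate the user's gaze direction from the direction vector of a
    front-mounted projection end, optionally shifted along the vertical
    (z) axis, and return it re-normalized to unit length."""
    x, y, z = front_projector_dir
    z += vertical_offset
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)
```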
Then, the transmitting device 3 transmits the projected position information, together with the corresponding partial output information, to the corresponding projection ends, so that each projection end projects based on its own corresponding partial output information and projected position information.
Continuing the foregoing first example: based on the projection orientation information "directly in front" set by user A, the processing unit determines, for each of the 50 miniature holographic projectors used for projection, the projection angle information corresponding to the projection orientation "directly in front". Then, the transmitting device 3 in the smartphone transmits to each of the 50 miniature holographic projectors its corresponding partial output information and projection angle information, so that each projector projects based on its own partial output information and projection angle information, thereby presenting the complete image of image_1 directly in front of user A.
According to a preferred embodiment of the invention, each item of scanning information includes scanning content information and scanning position information. The transmitting device 3 transmits the corresponding partial output information to each of the one or more projection ends, so that each projection end projects based on the scanning position information and scanning content information in its own corresponding partial output information.
Preferably, the processing unit can determine the projected position information of the one or more projection ends based on the scanning position information. The transmitting device 3 then transmits the projected position information, together with the corresponding partial output information, to the corresponding projection ends, so that each projection end projects based on its own corresponding partial output information and projected position information.
For example, when the scanning information includes scan video information and corresponding three-dimensional coordinate information, the processing unit can determine, based on the three-dimensional coordinate information corresponding to each item of scan video information, the projected position information of each projection end used to project that item of scan video information. The transmitting device 3 then transmits to each projection end its corresponding scan video information and projected position information. Each projection end determines, according to the three-dimensional coordinate information of its corresponding scan video information, the depth at which the corresponding scan video information is to be projected, and projects based on the scan video information together with its corresponding projected position information and depth information, thereby presenting, at the corresponding projected position, a stereoscopic image corresponding to each item of scan video information and its three-dimensional coordinate information.
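The depth determination described above can be sketched as each projection end deriving a depth value from the three-dimensional coordinate attached to its scan video item. This is a minimal illustration; the coordinate convention and the choice of Euclidean distance as depth are assumptions.

```python
import math

def depth_for_scan(scan_xyz, projector_xyz=(0.0, 0.0, 0.0)):
    """Depth at which to project a scan video item: the Euclidean
    distance between the projection end and the item's 3D coordinate."""
    return math.dist(scan_xyz, projector_xyz)
```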
Preferably, the processing unit according to the invention also includes a receiving device (not shown) and a sending device (not shown).

The receiving device receives scanning information from one or more scanning ends among the multiple scanning ends corresponding to the processing unit.

The sending device sends the received one or more items of scanning information to another processing unit.
Preferably, based on a predetermined time interval, the receiving device and the sending device continually perform the above steps of receiving scanning information from one or more of the multiple scanning ends corresponding to the processing unit, and of sending the received one or more items of scanning information to another processing unit, so that the other processing unit continually projects based on the one or more items of scanning information it receives.
For example, every 0.1 seconds, the receiving device of a first processing unit receives the scanning information of the multiple scanning ends corresponding to it, and the sending device sends that scanning information to a second processing unit, so that the projection devices of the wearable device containing the second processing unit can continually project an image of the user of the wearable device to which the first processing unit belongs.
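The 0.1-second relay cycle can be sketched as a simple receive-and-forward loop. The `receive` and `send` callables are hypothetical placeholders for the actual transport; the point of the sketch is only the periodic loop structure.

```python
import time

def relay_scanning_info(receive, send, interval=0.1, cycles=None):
    """Repeatedly receive scanning information from the local scanning
    ends and forward it to another processing unit, once per interval.
    cycles=None runs indefinitely; an integer bounds the loop."""
    n = 0
    while cycles is None or n < cycles:
        scanning_info = receive()   # scanning info from local scanning ends
        send(scanning_info)         # forward to the other processing unit
        time.sleep(interval)
        n += 1
```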
According to the scheme of the invention, the information to be output for projection can be processed so that the multiple projection ends contained in a wearable device each project a part of the information to be output, thereby completing partial or overall projection output of the information to be output and giving the user a better visual experience. Moreover, according to the scheme of the invention, scanning information can be transmitted between wearable devices, so that a user can project, based on received scanning information from another user, a virtual image corresponding to that other user; the two users can then interact based on each other's virtual images, improving the user experience.
The software program of the present invention can be executed by a processor to realize the steps or functions described above. Similarly, the software program of the present invention (including related data structures) can be stored in a computer-readable recording medium, for example, RAM, a magnetic or optical drive, a floppy disk, or similar devices. In addition, some steps or functions of the present invention can be realized in hardware, for example, as circuits that cooperate with a processor to perform each function or step.
In addition, part of the present invention can be embodied as a computer program product, such as computer program instructions which, when executed by a computer, can invoke or provide the method and/or technical scheme according to the present invention through the operation of the computer. The program instructions invoking the method of the present invention may be stored in a fixed or removable recording medium, and/or transmitted via broadcast or a data stream in another signal-bearing medium, and/or stored in the working memory of a computer device operating according to the program instructions. Here, one embodiment of the present invention includes a device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to run the methods and/or technical schemes based on the foregoing multiple embodiments of the present invention.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, the embodiments should in all respects be regarded as exemplary and non-restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is intended that all changes falling within the meaning and scope of equivalency of the claims be included in the present invention. Any reference sign in a claim should not be construed as limiting the claim involved. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a system claim can also be realized by a single unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any particular order.
Claims (16)
1. A method for assisting multiple projection ends in projecting, wherein the multiple projection ends correspond to a processing unit, and wherein the method comprises the following steps:
a. the processing unit obtains information to be output;
b. the processing unit determines, according to the information to be output, the partial output information in the information to be output that respectively corresponds to one or more of the multiple projection ends;
c. the processing unit transmits the corresponding partial output information to each of the one or more projection ends, so that each projection end projects based on its own corresponding partial output information.
2. The method according to claim 1, wherein, when the information to be output includes view-related information, step b comprises the following steps:
determining, among the multiple projection ends, one or more projection ends for projecting the information to be output;
dividing, based on a predetermined division rule, the information to be output into partial output information respectively corresponding to the one or more projection ends.
3. The method according to claim 1, wherein, when the information to be output includes one or more items of scanning information, step b comprises the following steps:
b1. selecting at least one item of scanning information for projection from the one or more items of scanning information;
b2. selecting, from the multiple projection ends, at least one projection end respectively corresponding to each item of scanning information in the at least one item of scanning information, and taking each item of scanning information as the partial output information of the corresponding projection end(s) among the at least one projection end.
4. The method according to claim 3, wherein each item of scanning information includes scanning content information and scanning position information, and wherein step c further comprises the following step:
transmitting the corresponding partial output information to each of the one or more projection ends, so that each projection end projects based on the scanning position information and scanning content information in its own corresponding partial output information.
5. The method according to any one of claims 1 to 4, wherein the method further comprises the following steps:
determining the gaze direction of the user of the multiple projection ends;
determining, based on the user's gaze direction, projected position information corresponding to the one or more projection ends, so that each projection end projects its own corresponding partial output information according to its respective projected position information.
6. The method according to claim 3 or 4, wherein the method further comprises the following step:
determining a scanning projection mode;
wherein step b1 comprises the following step:
selecting, based on the determined scanning projection mode, at least one item of scanning information for projection from the one or more items of scanning information.
7. The method according to claim 3 or 4, wherein the processing unit also corresponds to one or more scanning ends, and wherein the method further comprises the following steps:
receiving scanning information from the one or more scanning ends;
sending the received one or more items of scanning information to another processing unit.
8. The method according to any one of claims 1 to 4, wherein the processing unit and the corresponding multiple projection ends are both contained in one wearable device.
9. A processing unit for assisting multiple projection ends in projecting, wherein the multiple projection ends correspond to the processing unit, and wherein the processing unit comprises:
an acquisition device for obtaining information to be output;
a determining device for determining, according to the information to be output, the partial output information in the information to be output that respectively corresponds to one or more of the multiple projection ends;
a transmitting device for transmitting the corresponding partial output information to each of the one or more projection ends, so that each projection end projects based on its own corresponding partial output information.
10. The processing unit according to claim 9, wherein, when the information to be output includes view-related information, the determining device comprises:
a sub-determining device for determining, among the multiple projection ends, one or more projection ends for projecting the information to be output;
a dividing device for dividing, based on a predetermined division rule, the information to be output into partial output information respectively corresponding to the one or more projection ends.
11. The processing unit according to claim 9, wherein, when the information to be output includes one or more items of scanning information, the determining device comprises:
a first selection device for selecting at least one item of scanning information for projection from the one or more items of scanning information;
a second selection device for selecting, from the multiple projection ends, at least one projection end respectively corresponding to each item of scanning information in the at least one item of scanning information, and taking each item of scanning information as the partial output information of the corresponding projection end(s) among the at least one projection end.
12. The processing unit according to claim 11, wherein each item of scanning information includes scanning content information and scanning position information, and wherein the transmitting device is used for:
transmitting the corresponding partial output information to each of the one or more projection ends, so that each projection end projects based on the scanning position information and scanning content information in its own corresponding partial output information.
13. The processing unit according to any one of claims 9 to 12, wherein the processing unit further comprises:
a sight-line determining device for determining the gaze direction of the user of the multiple projection ends;
a position determining device for determining, based on the user's gaze direction, projected position information corresponding to the one or more projection ends, so that each projection end projects its own corresponding partial output information according to its respective projected position information.
14. The processing unit according to claim 11 or 12, wherein the processing unit further comprises:
a mode determining device for determining a scanning projection mode;
wherein the first selection device is used for:
selecting, based on the determined scanning projection mode, at least one item of scanning information for projection from the one or more items of scanning information.
15. The processing unit according to claim 11 or 12, wherein the processing unit also corresponds to one or more scanning ends, and wherein the processing unit further comprises:
a receiving device for receiving scanning information from the one or more scanning ends;
a sending device for sending the received one or more items of scanning information to another processing unit.
16. The processing unit according to any one of claims 9 to 12, wherein the processing unit and the corresponding multiple projection ends are both contained in one wearable device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410842843.6A CN104571512B (en) | 2014-12-30 | 2014-12-30 | A kind of method and apparatus for assisting multiple projection ends to be projected |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410842843.6A CN104571512B (en) | 2014-12-30 | 2014-12-30 | A kind of method and apparatus for assisting multiple projection ends to be projected |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104571512A CN104571512A (en) | 2015-04-29 |
CN104571512B true CN104571512B (en) | 2017-11-24 |
Family
ID=53087790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410842843.6A Expired - Fee Related CN104571512B (en) | 2014-12-30 | 2014-12-30 | A kind of method and apparatus for assisting multiple projection ends to be projected |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104571512B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10571863B2 (en) | 2017-12-21 | 2020-02-25 | International Business Machines Corporation | Determine and project holographic object path and object movement with multi-device collaboration
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5673084A (en) * | 1992-04-17 | 1997-09-30 | Goldstar Co., Ltd. | Movie camera system having view finding and projecting operations and method |
CN1570906A (en) * | 2003-07-14 | 2005-01-26 | 活跃动感科技股份有限公司 | Projection playing system and playing method thereof |
CN103728727A (en) * | 2013-12-19 | 2014-04-16 | 财团法人车辆研究测试中心 | Information display system capable of automatically adjusting visual range and display method of information display system |
CN104166236A (en) * | 2013-05-17 | 2014-11-26 | 许振宇 | Multimedia projection glasses |
-
2014
- 2014-12-30 CN CN201410842843.6A patent/CN104571512B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5673084A (en) * | 1992-04-17 | 1997-09-30 | Goldstar Co., Ltd. | Movie camera system having view finding and projecting operations and method |
CN1570906A (en) * | 2003-07-14 | 2005-01-26 | 活跃动感科技股份有限公司 | Projection playing system and playing method thereof |
CN104166236A (en) * | 2013-05-17 | 2014-11-26 | 许振宇 | Multimedia projection glasses |
CN103728727A (en) * | 2013-12-19 | 2014-04-16 | 财团法人车辆研究测试中心 | Information display system capable of automatically adjusting visual range and display method of information display system |
Also Published As
Publication number | Publication date |
---|---|
CN104571512A (en) | 2015-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9369638B2 (en) | Methods for extracting objects from digital images and for performing color change on the object | |
US8976160B2 (en) | User interface and authentication for a virtual mirror | |
Goesele et al. | Ambient point clouds for view interpolation | |
US8970569B2 (en) | Devices, systems and methods of virtualizing a mirror | |
AU2014304760B2 (en) | Devices, systems and methods of virtualizing a mirror | |
GB2564745B (en) | Methods for generating a 3D garment image, and related devices, systems and computer program products | |
CN110288692B (en) | Illumination rendering method and device, storage medium and electronic device | |
CN103871106A (en) | Method of fitting virtual item using human body model and system for providing fitting service of virtual item | |
US8526734B2 (en) | Three-dimensional background removal for vision system | |
CN104735435B (en) | Image processing method and electronic device | |
US20150269759A1 (en) | Image processing apparatus, image processing system, and image processing method | |
KR101556158B1 (en) | The social service system based on real image using smart fitting apparatus | |
CN107850934A (en) | Virtual/augmented reality system with dynamic area resolution ratio | |
JP2018180654A (en) | Information processing device, image generation method, and program | |
KR20200138349A (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN107469355A (en) | Game image creation method and device, terminal device | |
CN107016730A (en) | The device that a kind of virtual reality is merged with real scene | |
KR20140130638A (en) | The smart fitting apparatus and method based real image | |
US20190340773A1 (en) | Method and apparatus for a synchronous motion of a human body model | |
CN106981100A (en) | The device that a kind of virtual reality is merged with real scene | |
CN104571512B (en) | A kind of method and apparatus for assisting multiple projection ends to be projected | |
JP6775669B2 (en) | Information processing device | |
KR102287939B1 (en) | Apparatus and method for rendering 3dimensional image using video | |
JP7479793B2 (en) | Image processing device, system for generating virtual viewpoint video, and method and program for controlling the image processing device | |
JP2018198025A (en) | Image processing device, image processing device control method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20210108 Address after: 310052 room 508, 5th floor, building 4, No. 699 Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province Patentee after: Alibaba (China) Co.,Ltd. Address before: 100080 room 701-52, 7th floor, 2 Haidian East 3rd Street, Haidian District, Beijing Patentee before: ZHUOYI CHANGXIANG (BEIJING) TECHNOLOGY Co.,Ltd. |
|
TR01 | Transfer of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20171124 |
|
CF01 | Termination of patent right due to non-payment of annual fee |