CN105190487B - Method and apparatus for preventing collision between subjects - Google Patents
- Publication number
- CN105190487B (application CN201580000721.5A)
- Authority
- CN
- China
- Prior art keywords
- user
- image
- subject
- area
- activities
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
A device for executing a computer game is provided. The device includes: an output unit configured to transmit a first image and a second image to a display device, the first image being generated based on the shape of a first user participating in the computer game, and the second image being generated based on the shape of a second user participating in the computer game; and a control unit configured to predict the possibility of a collision between the first user and the second user and, according to a result of the prediction, to control the output unit to transmit to the display device warning information indicating the possibility of the collision.
Description
Technical field
This application relates to a method and apparatus for preventing collisions between subjects.
Background technology
Computer games have been developed that are executed based on gestures of a user participating in the game. For example, computer games have been developed in which whether a task is completed is determined according to the user's activity.
The content of the invention
Technical problem
While a computer game is being executed, depending on the user's activity, the user may collide with another user located near the user or with a thing placed close to the user.
Technical solution
A method and apparatus need to be developed for preventing a user who participates in a computer game from colliding with another user or thing.
Beneficial effect
The device 100, or the device 101 for executing content, can determine, based on shape information of each subject, an activity range that includes the points reachable by that subject, and can predict whether the subjects may collide with each other. Therefore, the device 100 or the device 101 for executing content can prevent collisions between subjects in advance. In addition, if it is predicted that the subjects may collide with each other, the device 100 or the device 101 for executing content can generate warning information or pause the execution of the content.
Other embodiments can also be implemented by computer-readable code/instructions in/on a medium (such as a computer-readable medium) to control at least one processing element to implement any of the above embodiments. The medium may correspond to any medium/media that permit storage and/or transmission of the computer-readable code.
Brief description of the drawings
The above and other aspects, features, and advantages of certain embodiments of the disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Figures 1A and 1B are conceptual diagrams showing an example of a method of preventing collisions between a plurality of subjects, according to an embodiment of the disclosure;
Figure 1C is a configuration diagram showing an example in which a device for executing content and a device for outputting images and sound exist independently of each other, according to an embodiment of the disclosure;
Figure 1D is a schematic diagram for describing an example in which content is executed by a server, according to an embodiment of the disclosure;
Figure 2 is a flowchart showing an example of a method of preventing collisions between a plurality of subjects, according to an embodiment of the disclosure;
Figure 3 is a schematic diagram for describing an example in which a device obtains shape information of a subject, according to an embodiment of the disclosure;
Figures 4A and 4B are schematic diagrams for describing an example in which a device obtains shape information of a user, according to an embodiment of the disclosure;
Figure 5 is a schematic diagram showing an example in which shape information of a user is output on a screen of a device, according to an embodiment of the disclosure;
Figures 6A and 6B are schematic diagrams for describing an example in which a device adds an object, according to an embodiment of the disclosure;
Figures 7A and 7B are schematic diagrams for describing an example in which a device deletes an object, according to an embodiment of the disclosure;
Figure 8 is a schematic diagram showing an example in which an activity range of a subject is output on a screen of a device, according to an embodiment of the disclosure;
Figures 9A, 9B, and 9C are schematic diagrams for describing an example in which a device determines an activity range of a subject based on setting information input by a user, according to an embodiment of the disclosure;
Figures 10A and 10B are schematic diagrams for describing an example in which a device determines an activity range of a subject, according to an embodiment of the disclosure;
Figure 11 is a schematic diagram for describing an example in which a device obtains shape information of a plurality of subjects, according to an embodiment of the disclosure;
Figure 12 is a schematic diagram showing an example in which shape information and an activity range of each of a plurality of users are output on a screen of a device, according to an embodiment of the disclosure;
Figure 13A is a schematic diagram showing an example in which a plurality of objects are output on a screen of a device, according to an embodiment of the disclosure;
Figure 13B is a schematic diagram showing an example in which a device does not execute content, according to an embodiment of the disclosure;
Figures 14A and 14B are schematic diagrams for describing an example in which a device obtains shape information of a plurality of subjects and determines activity ranges of the plurality of subjects, according to an embodiment of the disclosure;
Figures 15A and 15B are schematic diagrams for describing an example in which a device obtains shape information of a plurality of subjects and determines activity ranges of the plurality of subjects, according to an embodiment of the disclosure;
Figures 16A and 16B are schematic diagrams for describing an example in which a device obtains shape information of a plurality of subjects and determines activity ranges of the plurality of subjects, according to an embodiment of the disclosure;
Figure 17 is a flowchart showing an example in which a device obtains shape information of a subject and determines an activity range of the subject, according to an embodiment of the disclosure;
Figure 18 is a schematic diagram for describing an example in which a device determines an activity range of a subject based on shape information and a movement path of the subject, according to an embodiment of the disclosure;
Figure 19 is a flowchart for describing an example in which a device predicts whether a first subject and a second subject may collide, according to an embodiment of the disclosure;
Figures 20A, 20B, and 20C are schematic diagrams for describing an example in which a device compares the shortest distance between subjects with a predetermined distance value, according to an embodiment of the disclosure;
Figures 21A, 21B, and 21C are schematic diagrams showing examples of images output on a screen of a device in a case where the device determines that subjects may collide with each other, according to an embodiment of the disclosure;
Figure 21D is a schematic diagram showing an example in which a device resumes execution of content after pausing the execution of the content, according to an embodiment of the disclosure;
Figure 22 is a schematic diagram for describing an example in which a device compares the shortest distance between subjects with a predetermined distance value, according to an embodiment of the disclosure;
Figures 23A, 23B, and 23C are schematic diagrams showing examples of images output on a screen of a device in a case where the device determines that users may collide with each other, according to an embodiment of the disclosure;
Figure 24 is a schematic diagram for describing an example in which a device sets a safety zone or a danger zone, according to an embodiment of the disclosure;
Figure 25 is a schematic diagram for describing an example in which a device outputs warning information in a case where a subject moves out of a safety zone or enters a danger zone, according to an embodiment of the disclosure;
Figure 26 is a schematic diagram for describing an example in which a device designates a dangerous thing, according to an embodiment of the disclosure;
Figure 27 is a schematic diagram for describing an example in which a device outputs warning information in a case where a subject approaches a dangerous thing, according to an embodiment of the disclosure;
Figure 28 is a schematic diagram for describing an example in which a device transmits warning information to another device, according to an embodiment of the disclosure;
Figure 29 is a block diagram of an example of a device according to an embodiment of the disclosure;
Figure 30 is a block diagram of an example of a device according to an embodiment of the disclosure; and
Figure 31 is a block diagram of an example of a system for executing content according to an embodiment of the disclosure.
Throughout the drawings, it should be noted that the same reference numerals are used to describe the same or similar elements, features, and structures.
Embodiment
Optimal mode
Aspects of the disclosure address at least the above problems and/or disadvantages and provide at least the advantages described below. Accordingly, an aspect of the disclosure is to provide a method and apparatus for preventing collisions between subjects.
Another aspect of the disclosure is to provide a non-transitory computer-readable recording medium having recorded thereon a computer program that performs the method when executed by a computer.
Additional aspects will be set forth in part in the description that follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments.
According to an aspect of the disclosure, a device for executing a computer game is provided. The device includes: an output unit configured to transmit a first image and a second image to a display device, the first image being generated based on the shape of a first user participating in the computer game, and the second image being generated based on the shape of a second user participating in the computer game; and a control unit configured to predict the possibility of a collision between the first user and the second user and, according to a result of the prediction, to control the output unit to transmit to the display device warning information indicating the possibility of the collision.
The control unit may determine a first area that includes the farthest points reachable by a part of the first user as the first user moves within a certain area, and may determine a second area that includes the farthest points reachable by a part of the second user as the second user moves within a certain area.
If the first area and the second area overlap each other, the control unit may determine that there is a possibility of a collision between the first user and the second user.
The control unit may predict a movement path of the first user and a movement path of the second user, and may determine the first area and the second area by further considering the predicted movement paths of the first user and the second user.
The movement paths may be predicted based on the details of the computer game.
The first image and the second image may include images generated by an external camera.
The warning information may include an image output from an external display device or a sound output from the external display device.
If it is determined that there is a possibility of a collision between the first user and the second user, the control unit may pause the execution of the computer game.
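The overlap test described in this aspect can be sketched as follows. This is only a minimal illustration, assuming each user's activity range is approximated as a circle around the user's position whose radius is the distance to the farthest reachable point (e.g., arm span); the `ActivityRange` type and function names are hypothetical, not taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class ActivityRange:
    """Circular approximation of the area a user's body can reach."""
    x: float        # user's position (meters)
    y: float
    reach: float    # distance to the farthest reachable point (meters)

def ranges_overlap(a: ActivityRange, b: ActivityRange) -> bool:
    """Two circular activity ranges overlap when the distance between
    their centers is smaller than the sum of their radii."""
    dist = math.hypot(a.x - b.x, a.y - b.y)
    return dist < a.reach + b.reach

def predict_collision(a: ActivityRange, b: ActivityRange) -> str:
    # If the ranges overlap, the device would transmit warning
    # information to the display device and might pause the game.
    return "warn" if ranges_overlap(a, b) else "ok"

first = ActivityRange(x=0.0, y=0.0, reach=0.9)   # first user
second = ActivityRange(x=1.5, y=0.0, reach=0.9)  # second user
print(predict_collision(first, second))          # 1.5 < 1.8, ranges overlap
```

A real implementation would derive `reach` from the shape information obtained by the camera rather than from a fixed constant.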
According to another aspect of the disclosure, a method of executing a computer game is provided. The method includes: generating a first image based on the shape of a first user participating in the computer game; generating a second image based on the shape of a second user participating in the computer game; transmitting the first image and the second image to a display device; predicting the possibility of a collision between the first user and the second user; and, based on a result of the prediction, transmitting to the display device warning information indicating the possibility of the collision.
The method may further include: determining a first area that includes the farthest points reachable by a part of the first user as the first user moves within a certain area; and determining a second area that includes the farthest points reachable by a part of the second user as the second user moves within a certain area.
The generating of the warning information may include generating the warning information if the first area and the second area overlap each other.
The method may further include predicting a movement path of the first user, and the determining of the first area may include determining the first area by further considering the predicted movement path.
The movement path may be predicted based on the details of the computer game.
The first image and the second image may include images generated by an external camera.
The warning information may include an image output from an external display device or a sound output from the external display device.
The method may further include pausing the execution of the computer game if it is determined that there is a possibility of a collision between the first user and the second user.
According to another aspect of the disclosure, a device for executing a computer game is provided. The device includes: an output unit configured to transmit a first image and a second image to a display device, the first image being generated based on the shape of a user participating in the computer game, and the second image being generated based on the shape of at least one subject located near the user; and a control unit configured to predict the possibility of a collision between the user and the subject and, according to a result of the prediction, to control the output unit to transmit to the display device warning information indicating the possibility of the collision.
According to another aspect of the disclosure, a device for executing a computer game is provided. The device includes: an output unit configured to transmit a first image and a second image to a display device, the first image being generated based on the shape of a user participating in the computer game, and the second image being generated based on the shape of at least one subject located near the user; and a control unit configured to set a danger zone based on externally input information and, when the control unit determines that the user enters the danger zone, to control the output unit to transmit to the display device warning information indicating that the user has entered the danger zone.
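The danger-zone aspect can be illustrated with a small sketch. For illustration only, the zone is assumed to be an axis-aligned rectangle set from external input, and the user's position is assumed to come from tracking in the camera image; the `DangerZone` type and function names are hypothetical, not defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DangerZone:
    """Axis-aligned rectangular danger zone set from external input."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def check_user(zone: DangerZone, user_pos: Tuple[float, float]) -> Optional[str]:
    """Return warning information if the user enters the danger zone."""
    if zone.contains(*user_pos):
        return "Warning: user entered the danger zone"
    return None

zone = DangerZone(2.0, 0.0, 3.0, 1.0)   # e.g., the area around a glass table
print(check_user(zone, (2.5, 0.5)))     # inside the zone -> warning string
print(check_user(zone, (0.0, 0.0)))     # outside the zone -> None
```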
According to another aspect of the disclosure, a device for executing a computer game is provided. The device includes: an input unit configured to receive an input of an image signal; and a control unit configured to: set a first area corresponding to a first object and a second area corresponding to a second object, the first object and the second object being included in the image signal; measure the distance between the first area and the second area; and output a predetermined message if the value of the measured distance is less than a predetermined value.
The first area may include the farthest points reachable by a part of a first subject as the first subject moves within a certain place, the first subject corresponding to the first object; and the second area may include the farthest points reachable by a part of a second subject as the second subject moves within a certain place, the second subject corresponding to the second object.
The control unit may predict a movement path of the first subject and a movement path of the second subject, and may determine the first area and the second area by further considering the predicted movement paths.
The movement paths may be predicted based on the details of the computer game.
If the value of the measured distance is less than the predetermined value, the control unit may pause the execution of the computer game.
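The distance-threshold variant above can be sketched as follows. This is a minimal illustration in which each area is again treated as a circle, so the shortest distance between the two areas is the center distance minus both radii; the function names and the threshold value are hypothetical.

```python
import math
from typing import Optional, Tuple

def shortest_distance(c1: Tuple[float, float], r1: float,
                      c2: Tuple[float, float], r2: float) -> float:
    """Shortest distance between two circular areas (0 if they overlap)."""
    center_dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return max(0.0, center_dist - r1 - r2)

def handle_frame(c1: Tuple[float, float], r1: float,
                 c2: Tuple[float, float], r2: float,
                 threshold: float = 0.5) -> Optional[str]:
    """Output a predetermined message (and let the caller pause the game)
    if the measured distance between the two areas is below the threshold."""
    if shortest_distance(c1, r1, c2, r2) < threshold:
        return "predetermined message: possible collision, pausing game"
    return None

print(handle_frame((0.0, 0.0), 0.9, (2.0, 0.0), 0.9))  # distance about 0.2 < 0.5
print(handle_frame((0.0, 0.0), 0.9, (4.0, 0.0), 0.9))  # distance about 2.2 -> None
```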
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the accompanying drawings, discloses various embodiments of the disclosure.
Embodiment
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. Although the following description includes various specific details to assist in that understanding, these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are used merely by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purposes only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
Terms used herein will be briefly described, and the inventive concept will be described in greater detail below.
The terms used herein are general terms currently in wide use, selected in consideration of the functions provided by the inventive concept, but they may vary according to the intention of those of ordinary skill in the art, precedent, or the emergence of new technologies. In addition, in some cases the applicant may arbitrarily select particular terms, in which case the applicant will provide the meaning of those terms in the description of the inventive concept. Accordingly, it should be understood that the terms used herein are to be interpreted with meanings consistent with their meanings in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It will be further understood that the terms "comprises" and/or "having," when used herein, specify the presence of components but do not preclude the presence or addition of one or more other components, unless otherwise indicated. In addition, terms such as "unit" or "module" used herein denote entities that handle at least one function or operation. These entities may be implemented by hardware, software, or a combination of hardware and software.
"Device" as used herein refers to an element included in certain equipment to achieve a certain purpose. In more detail, equipment that includes a screen capable of display and an interface for receiving user input, and that receives input from a user and thereby achieves a certain purpose, may be included in an embodiment of the inventive concept without limitation.
The inventive concept will now be described more fully with reference to the accompanying drawings, in which various embodiments of the inventive concept are shown. The inventive concept may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. In the description of the inventive concept, detailed explanations of related art are omitted when it is deemed that they may unnecessarily obscure the essence of the inventive concept. Like reference numerals denote like elements throughout the description of the drawings.
Figures 1A and 1B are conceptual diagrams showing an example of a method of preventing collisions between a plurality of subjects, according to an embodiment of the disclosure.
Referring to Figures 1A and 1B, persons or things 120 and 121 located in front of a device 100 are shown, together with objects 110 and 111, where the objects 110 and 111 are obtained when the persons or things 120 and 121 are captured by a camera included in the device 100 and are output to the screen of the device 100. Hereinafter, the objects 110 and 111 respectively refer to the images 110 and 111 of the persons or things 120 and 121 displayed on the screen of the device 100. Also, hereinafter, a subject 120 or 121 refers to the person or thing 120 or 121. In other words, the camera captures the subject 120 or 121, and the object 110 or 111, which is the image of the subject 120 or 121, is output to the screen of the device 100. For ease of description, the objects 110 and 111 described with reference to Figures 1A and 1B are images of users of content executed by the device 100, but the objects 110 and 111 are not limited thereto.
A subject may be a user participating in content, or a person or thing not participating in the content. The object 110 may be an image obtained by capturing a person using the content, or an image obtained by capturing a person not using the content. In addition, the object 110 may be an image obtained by capturing a thing possessed by a person or a thing placed in the space where the person is located. Here, a thing may correspond to an animal, a plant, or furniture arranged in the space. Content refers to a program that can be controlled by recognizing a user's activity. For example, the content may be a computer game executed when the user performs a certain activity (such as a dance game or a sports game), or a program that outputs the user's activity to the screen of the device 100.
Assuming that a computer game is executed, both of the objects 110 and 111 may be images of users participating in the computer game. Alternatively, one of the objects 110 and 111 may be an image of a user participating in the computer game, and the other may be an image of a person not participating in the computer game, or an image of a thing. As an example, assuming that a dance game is executed, the objects 110 and 111 may respectively be images of users enjoying the dance game together. As another example, one of the objects 110 and 111 may be an image of a user enjoying the dance game, and the other may be an image of a person who is close to the user and is watching the user play the dance game. As another example, one of the objects 110 and 111 may be an image of a user, and the other may be an image of a bystander or an animal near the user, or an image of a thing placed close to the user. For example, when a dance game is being executed, some of several people may be set as participants in the dance game (i.e., users), and the others may be set as non-participants in the dance game (i.e., non-users).
Hereinafter, an image of a person or animal is referred to as a dynamic object, and an image of a thing or plant that cannot move or walk on its own is referred to as a static object.
As an example, the displayed objects may include images of a first user and a second user participating in a computer game. As another example, the objects may include an image of a user participating in the computer game and an image of a person not participating in the computer game. As another example, the objects may include an image of a user participating in the computer game and an image of an animal close to the user. As another example, the objects may include an image of a user participating in the computer game and an image of a thing (such as furniture) placed close to the user.
An example in which the objects 110 and 111 include images of a first user and a second user participating in a computer game will be described later with reference to Figure 11. An example in which the objects 110 and 111 include an image of a user participating in a computer game and an image of a person not participating in the computer game will be described later with reference to Figure 14. In addition, an example in which the objects 110 and 111 respectively include an image of a user participating in a computer game and an image of an animal close to the user will be described later with reference to Figure 15. An example in which the objects 110 and 111 respectively include an image of a user participating in a computer game and an image of a thing placed close to the user will be described later with reference to Figure 16.
In addition, an object may be a virtual character set by the user. For example, by setting the content, the user may generate, as an object, a virtual character that does not actually exist.
Referring to Figure 1A, the users 120 and 121 use the content while separated from each other by a predetermined distance or more. For example, assuming that the content is a dance game, since the users 120 and 121 are spaced apart from each other so that they do not collide, the users 120 and 121 can safely perform a certain activity.
Referring to Figure 1B, since the users 120 and 121 are close to each other (i.e., within a predetermined distance), if at least one of the users 120 and 121 performs a certain activity, the other user may collide with that user. A collision described herein refers to physical contact between the users 120 and 121. Alternatively, a collision refers to contact between the user 120 and another person, animal, plant, or piece of furniture located close to the user 120. In other words, a collision refers to contact between a part of the user 120 and a part of the thing 121. As an example, if a part of one user (such as his/her head, arm, trunk, or leg) contacts a part of another user, the two users are understood to have collided with each other. As another example, if a part of a user (such as his/her head, arm, trunk, or leg) touches a table, the user and the table are understood to have collided with each other.
If subjects collide with each other, a person or animal corresponding to one of the subjects may be injured, or a thing corresponding to one of the subjects may be broken or damaged. Therefore, the device 100 can predict the possibility of a collision between subjects. If it is determined that the possibility of a collision between subjects is high, the device 100 may output certain warning information. The warning information may be light, a color, or a certain image output from the screen of the device 100, or a sound output from a loudspeaker included in the device 100. In addition, if the device 100 is executing content, the device 100 may stop or pause the execution of the content as an example of the warning information. According to the warning information output by the device 100, a person or animal corresponding to an object can stop its action, and thus a collision between subjects can be prevented.
Figures 1A and 1B show the device 100 both executing content (such as a computer game) and outputting images and/or sound, but executing content and outputting images and sound are not limited thereto. In addition, the camera may be a separate piece of equipment from the device 100, or may be included in the device 100. Furthermore, the equipment for executing content and the equipment for outputting images and sound may exist separately from each other.
Figure 1C is a configuration diagram showing an example in which a device for executing content and a device for outputting images and sound exist independently of each other, according to an embodiment of the disclosure.
Referring to Fig. 1C, the system 1 includes a device 101 for executing content, a display device 102, and a camera 103. If it is assumed that the content is a computer game, the device 101 for executing content refers to a game console.
The camera 103 captures an image of a user participating in the computer game, or of at least one object located near the user, and sends the captured image to the device 101 for executing content. The captured image refers to an image showing the form of the user or of the at least one object.
The device 101 for executing content sends the image received from the camera 103 to the display device 102. In addition, if it is determined that there is a possibility of a collision between subjects, the device 101 for executing content generates a warning message indicating that the collision is possible, and sends the warning message to the display device 102. As one example, the objects 112 may include an image of a first user 120 and an image of a second user 121. As another example, the objects 112 may include an image of a user 120 participating in the computer game and an image of a person 121 not participating in the computer game. As another example, the objects 112 may include an image of the user 120 participating in the computer game and an image of an animal 121 located near the user 120. As yet another example, the objects 112 may include the user 120 participating in the computer game and a thing 121 (e.g., furniture) located near the user 120.
The display device 102 outputs the image or the warning message sent from the device 101 for executing content. The warning message may be light, a color, or an image output from the screen of the display device 102, or a sound output from a speaker included in the display device 102, etc.
As described above with reference to Figs. 1A to 1C, content is executed by the device 100 or by the device 101 for executing content. However, the execution of content is not limited thereto. In other words, content may be executed by a server, and the device 100 or the display device 102 may output the execution screen of the content.
Fig. 1D is a schematic diagram illustrating an example in which content is executed by a server, in accordance with an embodiment of the present disclosure.
Referring to Fig. 1D, the server 130 may be connected to the device 104 through a network. The user 122 requests the server 130 to execute content. For example, the user 122 may log in to the server 130 through the device 104 and select content stored in the server 130 so as to execute the content.
When executing the content, the server 130 sends the image to be output to the device 104. For example, if the content is a computer game, the server 130 may send an initial setup screen or an execution screen of the computer game to the device 104.
The device 104 or the camera 105 sends the image captured by the camera 105 to the server 130. The image captured by the camera 105 (an image including the objects 113 and 114) is output to the screen of the device 104. The device 104 may combine and output the execution screen of the content and the image captured by the camera 105. For example, if the content is a dance game, the device 104 may output an image showing the movement required of the user 122 together with the image obtained by photographing the user 122.
The subject 123 shown in Fig. 1D may be a person, an animal, a plant, or a thing (e.g., furniture). In other words, the subject 123 may be another user enjoying the content together with the user 122, or a person close to the user 122. Alternatively, the subject 123 may be an animal, a plant, or a thing located near the user 122.
While the content is being executed, if it is predicted that the subjects 122 and 123 will collide with each other, the server 130 or the device 104 may generate a warning signal.
As an example, the possibility of a collision between the subjects 122 and 123 may be predicted by the server 130. If it is predicted that the subjects 122 and 123 will collide, the server 130 may notify the device 104 that there is a possibility of a collision between the subjects 122 and 123, and the device 104 may output a warning signal.
As another example, the possibility of a collision between the subjects 122 and 123 may be predicted by the device 104. In other words, the server 130 may only execute the content, while the device 104 predicts the possibility of a collision between the subjects 122 and 123 and outputs a warning signal.
Hereinafter, examples of preventing collisions between subjects, performed by a device (e.g., the device 100, the device 104, etc.), are described with reference to Figs. 2 to 27.
Fig. 2 is a flowchart illustrating an example of a method of preventing collisions between a plurality of subjects, in accordance with an embodiment of the present disclosure.
Referring to Fig. 2, the method of preventing collisions between a plurality of subjects includes a plurality of operations that are processed in time series by the device 100 shown in Fig. 29 or the device 101 for executing content shown in Fig. 31. It should therefore be understood that the descriptions provided regarding the device 100 shown in Fig. 29 or the device 101 for executing content shown in Fig. 31 also apply to the method described with reference to Fig. 2, even if those descriptions are not repeated here.
In operation 210, the device 100 obtains a first object representing the form of a first subject and a second object representing the form of a second subject. The form described herein is the shape of the subject, including the length and volume of the subject as well as its outline. As one example, if it is assumed that the object is an image of a person, the object includes full information indicating the person's shape, such as the person's head-to-toe overall shape, height, leg length, trunk thickness, arm thickness, leg thickness, etc. As another example, if it is assumed that the object is an image of a chair, the object includes full information indicating the chair's shape, such as the chair's outline, height, leg thickness, etc.
As an example, if it is assumed that the device 100 executes content, the first object refers to an image of one user using the content, and the second object refers to an image of another user using the content or an image of a subject not using the content. If the second object is an image of a subject not using the content, the second object is either a dynamic object or a static object. An image of a person or an animal is referred to as a dynamic object; an image of a thing or a plant that cannot move or walk on its own is referred to as a static object. The content described herein refers to a program requiring activity from the user. For example, a game executed based on the user's activity may correspond to the content.
As another example, if it is assumed that no content is executed, the first object and the second object each refer to either a dynamic object or a static object. The meanings of static and dynamic objects are described above. For example, if it is assumed that the device 100 is installed at a position close to a crosswalk, an image of a pedestrian walking through the crosswalk or of a vehicle traveling on the roadway may correspond to a dynamic object, and an image of an obstacle located near the sidewalk may correspond to a static object.
In the description below, the first object and the second object are each an image obtained by photographing a single subject (i.e., a person, an animal, a plant, or a thing), but the first object and the second object are not limited thereto. In other words, the first object or the second object may be an image obtained by photographing a plurality of subjects together.
The device 100 may obtain a first image and a second image through images captured by a camera. Based on the first image and the second image, the device 100 may obtain not only information related to the actual form of the subjects (i.e., people, animals, plants, or things) corresponding to the objects (hereinafter referred to as shape information), but also information on the distance between each subject and the camera, and information on the distances between the plurality of subjects. In addition, depending on the type of the camera, the device 100 may obtain information on the colors of the subjects and the background.
For example, the camera may be a depth camera. A depth camera refers to a camera that generates an image containing not only the form of a target to be photographed but also three-dimensional (3D) information about the space (in other words, information on the distance between the target to be photographed and the camera, or information on the distances between the targets to be photographed). As an example, a depth camera may refer to a stereoscopic camera, which generates an image containing 3D information about a space by using images captured by two cameras placed at positions different from each other. As another example, a depth camera may refer to a camera that generates an image containing 3D information about a space by using the pattern of light emitted into the space and reflected back to the camera by the things in the space. As yet another example, a depth camera may be a camera that generates an image containing 3D information about a space based on the quantity of electric charge corresponding to the light emitted into the space containing a subject and reflected back to the camera by the things present in the space. However, the camera is not limited thereto, and may correspond without limitation to any camera capable of capturing an image containing the form of a subject and information about the space.
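For the stereoscopic-camera case mentioned above, depth can in principle be recovered from the horizontal disparity between the two views. The sketch below is not part of the disclosed embodiments; it assumes the standard rectified pinhole model (depth = focal length × baseline / disparity), and the numbers are illustrative only.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate the depth of one scene point from a rectified stereo pair.

    disparity_px: horizontal shift of the same scene point between the
                  left and right images, in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# A point shifted 40 px between two cameras 10 cm apart,
# seen through an 800 px focal length, lies 2 m away.
depth_m = depth_from_disparity(40, 800, 0.10)
```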
In addition, the device 100 may obtain the shape information of a subject based on data stored in a storage unit (e.g., the storage unit 2940 shown in Fig. 30). In other words, shape information of a subject that has been acquired in advance may be stored in the storage unit. In that case, the device 100 may read the shape information stored in the storage unit.
The descriptions provided with reference to Figs. 3 to 16 may correspond to operations to be carried out before content is executed. For example, if it is assumed that the content is a computer game, the descriptions provided below with regard to Figs. 3 to 16 may correspond to operations to be performed before the computer game is started.
Hereinafter, an example of obtaining shape information, performed by a device (e.g., the device 100), is described with reference to Fig. 3.
Fig. 3 is a schematic diagram illustrating an example of obtaining shape information, performed by a device, in accordance with an embodiment of the present disclosure.
Referring to Fig. 3, a user 310, the device 100, and a camera 320 are shown. Hereinafter, for ease of description, the device 100 is described as including a screen for displaying images, and the camera 320 and the device 100 are described as devices separate from each other. However, the camera 320 and the device 100 are not limited thereto. In other words, the camera 320 may be included in the device 100. In addition, the camera 320 is described as a camera that generates an image by emitting light into the space containing a subject and using the light reflected back to the camera by the objects and things in that space. However, as described with reference to Fig. 2, the camera 320 is not limited thereto.
If the screen and a touch pad of the device 100 form a layered structure constituting a touch screen, the screen may be used both as an output unit and as an input unit. The screen may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a 3D display, and an electrophoretic display (EPD). Depending on the implementation type of the screen, the device 100 may include two or more screens. The two or more screens may be arranged to face each other by using a hinge.
The camera 320 emits light into the space containing the user 310 and obtains the light reflected by the user 310. Then, the camera 320 generates data on the form of the user 310 by using the obtained light.
The camera 320 sends the data on the form of the user 310 to the device 100, and the device 100 obtains the shape information of the user 310 by using the transmitted data. Then, the device 100 outputs an object containing the shape information of the user 310 to the screen of the device 100. In this way, the shape information of the user 310 can be output to the screen of the device 100. In addition, the device 100 may also obtain information on the distance between the camera 320 and the user 310 by using the data sent from the camera 320.
Hereinafter, an example of obtaining the shape information of the user 310 by using the data sent from the camera 320, performed by the device 100, is described with reference to Fig. 4.
Figs. 4A and 4B are schematic diagrams illustrating an example of obtaining the shape information of a user by a device, in accordance with an embodiment of the present disclosure.
Referring to Figs. 4A and 4B, an example of data extracted from the data sent from the camera 320 and an example of the form 410 of the user estimated by using the extracted data are shown, respectively. In one embodiment, the estimation is performed by the device 100.
The device 100 extracts a region of a preset range from the data sent from the camera 320. The region of the preset range refers to the region where the user is located. In other words, the camera 320 emits light into the space; then, if the emitted light is reflected by the things (including the user) present in that space and returns to the camera 320, the camera 320 calculates a depth value corresponding to each pixel by using the light reflected back to it. The calculated depth value is represented by the brightness of the point corresponding to the pixel. In other words, if light emitted by the camera 320 is reflected from a position close to the camera 320 and returns to the camera 320, a dark point corresponding to that position may be displayed. If light emitted by the camera 320 is reflected from a position far from the camera 320 and returns to the camera 320, a bright point corresponding to that position may be displayed. Therefore, the device 100 can determine, by using the data sent from the camera 320 (e.g., the points corresponding to the pixels), the forms of the things (including the user) in the space into which the light was emitted, and the distance between those things and the camera 320.
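The near-is-dark, far-is-bright rendering described above can be sketched as a simple normalization of depth values to gray levels. The depth limits and function name are illustrative assumptions, not values from the disclosure.

```python
def depth_to_brightness(depth_m, min_depth=0.5, max_depth=5.0):
    """Map a per-pixel depth value to an 8-bit gray level so that points
    near the camera render dark and distant points render bright."""
    clamped = max(min_depth, min(max_depth, depth_m))
    fraction = (clamped - min_depth) / (max_depth - min_depth)
    return round(fraction * 255)

# A point at the near limit renders black; one at the far limit, white.
near_level = depth_to_brightness(0.5)
far_level = depth_to_brightness(5.0)
```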
The device 100 may extract, from the data sent from the camera 320, the data corresponding to the region where the user is located, and obtain information on the form of the user by eliminating noise from the extracted data. In addition, the device 100 may estimate a skeleton representing the form of the user by comparing the denoised data with the various human postures stored in the storage unit 2940. Furthermore, the device 100 may estimate the form 410 of the user by using the estimated skeleton, and obtain the shape information of the user by using the estimated form 410 of the user.
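A minimal sketch of the posture-comparison step above, under the assumption that each stored posture and the denoised capture are reduced to 2D joint-position vectors; the nearest stored posture (by summed squared joint distance) would seed the skeleton estimate. All names and coordinates here are hypothetical.

```python
def posture_distance(joints_a, joints_b):
    """Sum of squared distances between corresponding joint positions."""
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(joints_a, joints_b))

def match_posture(captured_joints, stored_postures):
    """Pick the stored posture closest to the captured joint layout."""
    return min(stored_postures,
               key=lambda name: posture_distance(captured_joints,
                                                 stored_postures[name]))

stored = {
    "standing": [(0.0, 2.0), (-0.5, 1.0), (0.5, 1.0)],  # head, hands down
    "arms_up":  [(0.0, 2.0), (-0.5, 2.2), (0.5, 2.2)],  # head, hands raised
}
best = match_posture([(0.0, 1.9), (-0.5, 2.1), (0.5, 2.1)], stored)
```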
Hereinafter, an example of outputting the obtained shape information of the user, performed by the device 100, is described with reference to Fig. 5.
Fig. 5 is a schematic diagram illustrating an example of outputting the shape information of a user on the screen of a device, in accordance with an embodiment of the present disclosure.
Referring to Fig. 5, the shape information of the user may be output to the screen 510 of the device 100. For example, the user's height 520, arm length 530, and leg length 540 may be output to the screen 510. In addition, information on the user's gender 550 may be output to the screen 510. The user's gender 550 may be determined by the device 100 by analyzing the data sent from the camera 320, or may be input directly by the user.
In addition, an object 560 corresponding to the user may be output to the screen 510. The object 560 may have a form corresponding to the data obtained by the camera 320, or may have a virtual form generated from the shape information of the user. For example, the object 560 may have the form photographed by the camera 320, or may have a virtual form generated by combining the height 520, the arm length 530, the leg length 540, and the gender 550. In addition, the form of the object 560 may be determined based on information input directly by the user. For example, the object 560 may be a game character generated so as to reflect the shape information of the user in the object 560.
An icon 570 asking the user whether the shape information of the user may be stored can be displayed on the screen 510. If at least one of the height 520, the arm length 530, the leg length 540, the gender 550, and the object 560 should not be stored (e.g., needs to be modified), the user selects the icon indicating "No". Then, the device 100 may obtain the shape information of the user again, and the camera 320 may be operated again. If the user selects the icon indicating "Yes", the shape information of the user is stored in the storage unit 2940 included in the device 100.
As described above with reference to Figs. 3 to 5, the device 100 may identify a user by using the data sent from the camera, and obtain the shape information of the identified user. The device 100 may also, based on information input by the user, add a person or thing, or delete a subject captured by the camera 320. In other words, the user may add a virtual subject or a subject not included in the data sent by the camera 320. In addition, the user may delete an object that was photographed by the camera 320 and is displayed on the screen of the device 100. Hereinafter, examples in which the device 100 adds an object displayed on the screen or deletes a displayed object are described with reference to Figs. 6A to 7B.
Figs. 6A and 6B are schematic diagrams illustrating an example of adding an object by a device, in accordance with an embodiment of the present disclosure.
Referring to Fig. 6A, an object 610 of a user is shown on the screen of the device 100. Assume that the object 610 shown in Fig. 6A is an image of a user of the content.
The data sent by the camera 320 may not include full information about the photographed space. In other words, due to influences such as the performance of the camera 320 or the surrounding environment, the camera 320 may not generate data containing full information about the people, animals, plants, and things in the photographed space. The user may arbitrarily set a virtual object (an image representing a virtual subject), and the device 100 may obtain shape information about the set virtual object.
As an example, even if a dog actually exists at a position close to the user, the data generated by the camera 320 may not include information on the form of the dog. In that case, the user may input the form 620 of the dog through an input unit (e.g., the input unit 2910) included in the device 100, and the device 100 may output an object representing the dog based on the input form 620. In this case, the device 100 may estimate the shape information of the dog (e.g., the size or leg length of the dog) by using the ratio between the object 620 representing the dog and the object 610 representing the user.
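The ratio-based estimate above can be sketched as follows, under the assumption that on-screen size in pixels is proportional to real size for objects drawn at the same scale; the function name and all numbers are illustrative.

```python
def estimate_real_height(user_height_m, user_object_px, added_object_px):
    """Estimate an added object's real size from its on-screen size,
    scaled by the known real height of the user's object."""
    metres_per_pixel = user_height_m / user_object_px
    return added_object_px * metres_per_pixel

# A 1.75 m user drawn 350 px tall implies a 90 px dog is about 0.45 m tall.
dog_height_m = estimate_real_height(1.75, 350, 90)
```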
As another example, even if no chair actually exists at a position close to the user, the user may input the form 630 of a chair through the input unit 2910 included in the device 100. In addition, the device 100 may output an object representing the chair to the screen based on the input form 630. The device 100 may estimate the shape information of the chair (e.g., the shape or size of the chair) by using the ratio between the object 630 representing the chair and the object 610 representing the user. Alternatively, the device may output a simple object representing the chair, such as a box, as shown in the figure.
Referring to Fig. 6B, the object 610 representing the user and the objects 621 and 631 added by the user are output to the screen of the device 100. The device 100 may output to the screen the objects 621 and 631 added based on the information input by the user.
Figs. 7A and 7B are schematic diagrams illustrating an example of deleting an object, performed by a device, in accordance with an embodiment of the present disclosure.
Referring to Fig. 7A, objects 710, 720, and 730 are shown on the screen of the device 100. Assume that the object 710 shown in Fig. 7A is an object representing a user of the content.
Among the objects output to the screen, there may be objects unnecessary for the user's use of the content. For example, if it is assumed that the content is a dance game, there may be, among the objects output to the screen, an object representing a subject with a low possibility of colliding with the user while the user is performing an activity. In that case, the user may delete the object representing the subject with a low possibility of colliding with the user.
For example, even if a desk and a chair are present near the user, since the distance between the chair and the user is long, the possibility of a collision between the user and the chair while the user performs a certain activity may be very low. In this case, the user may delete the object 730 representing the chair through the input unit 2910.
Fig. 7B shows the object 710 of the user and the object 720 not deleted by the user. The device 100 may refrain from outputting to the screen the object 730 to be deleted based on the information input by the user.
Referring back to Fig. 2, in operation 220, the device 100 determines, by using the shape information of the first subject, a first area containing the points reachable by at least a part of the first subject. In addition, in operation 230, the device 100 determines, by using the shape information of the second subject, a second area containing the points reachable by at least a part of the second subject.
For example, if it is assumed that the object is an image of a person, a part of the subject refers to a part of the user's body, such as the user's head, trunk, arm, or leg. Hereinafter, for ease of description, the area containing the points reachable by at least a part of a subject is defined as the "activity range". For example, the area containing all the points a person can reach by stretching his/her arms or legs can be described as the person's activity range.
As an example, the activity range of a subject may be the area containing the points reachable by a part of the user while the user remains stationary within a specified region. As another example, the activity range of a subject may be the area containing the points reachable by a part of the user while the user moves along a certain path. As yet another example, the activity range of a subject may be the area containing the points reachable by a part of the user while the user is active at a specified place.
Examples in which the combination of the points reachable by a part of the user while the user remains stationary within a specified region forms the activity range of the subject are described below with reference to Figs. 8 to 9B and Figs. 11 to 16B. In addition, an example in which the combination of the points reachable by a part of the user while the user is active at some place forms the activity range of the subject is described below with reference to Figs. 10A and 10B. Furthermore, an example in which the combination of the points reachable by a part of the user while the user moves along a certain path forms the activity range of the subject is described below with reference to Figs. 17 and 18.
Fig. 8 is a schematic diagram showing an example of outputting the activity range of a subject to the screen of a device, in accordance with an embodiment of the present disclosure.
Referring to Fig. 8, the device 100 determines the activity range of a subject by using the shape information of the subject. The device 100 determines the points reachable by a part of the user while the user remains stationary in place, taking into account the length values included in the shape information of the subject (for example, if it is assumed that the subject is a person, the person's height, arm length, leg length, etc.), and determines the activity range of the user by combining the determined points.
As an example, the device 100 may determine the activity range of a subject based on a mapping table stored in a storage unit (e.g., the storage unit 2940). The mapping table refers to a table showing, according to the type of the subject represented by an object, the ratio between the activity range of the subject and the size of the subject (for example, if the subject is a person, the person's height, arm length, or leg length). For example, the mapping table may contain information indicating that the radius of a person's activity range is equal to three quarters of the person's arm length, or to four fifths of the person's leg length. In addition, the mapping table may contain information on the activities a user can perform according to the type of the content. For example, the activities a user can perform in a soccer game may differ from the activities a user can perform in a dance game. Therefore, the activity range determined when the user participates in a soccer game may differ from the activity range determined when the user participates in a dance game. The mapping table may contain information on the activities a user can perform according to the type of the content, and store the size of the activity range for each activity, with the user's body size reflected in the size of the activity range. Therefore, the device 100 may determine different activity ranges for different types of content.
As another example, the device 100 may determine the activity range based on the sum of the length values of the parts of the subject. If it is assumed that the object is an image of a person, the device 100 may determine a length corresponding to twice the person's arm length as the diameter of the activity range.
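The two sizing rules above can be sketched as follows. The 3/4-of-arm-length and 4/5-of-leg-length ratios are the mapping-table examples given in the text; the function names and sample lengths are hypothetical.

```python
def radius_from_mapping_table(arm_length_m=None, leg_length_m=None):
    """Radius of the activity range per the example mapping-table ratios:
    three quarters of the arm length, or four fifths of the leg length."""
    if arm_length_m is not None:
        return 0.75 * arm_length_m
    if leg_length_m is not None:
        return 0.8 * leg_length_m
    raise ValueError("need an arm length or a leg length")

def diameter_from_arm_span(arm_length_m):
    """Alternative rule: twice the arm length as the diameter."""
    return 2 * arm_length_m

radius_m = radius_from_mapping_table(arm_length_m=0.8)
diameter_m = diameter_from_arm_span(0.8)
```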
The device 100 may output the determined activity range to the screen 810. The activity range output to the screen 810 may correspond to the diameter of a circle if the activity range forms a circle, or to the length of a side of a rectangle if the activity range forms a rectangle. In other words, the user can identify the activity range based on the information 820 output to the screen 810. If an object 830 representing the user is output to the screen 810, the activity range may be displayed as an image 840 near the object 830.
For example, the device 100 calculates the ratio between the activity range and the length of the object 830. For example, if it is assumed that the height of the person corresponding to the object 830 is 175.2 cm and the activity range of the person is 1.71 m, the device 100 calculates the ratio of the activity range to the person's height as 171/175.2 = 0.976. In addition, the device 100 calculates the length of the image 840 to be displayed near the object 830 by using the calculated ratio and the length value of the object 830 shown on the screen 810. For example, if it is assumed that the length of the object 830 shown on the screen 810 is 5 cm, the device 100 calculates the length of the image 840 to be displayed near the object 830 as 0.976 × 5 cm = 4.88 cm. The image 840 corresponding to the length calculated by the device 100 is displayed on the screen 810. For example, the image 840 may have the shape of a circle whose diameter is equal to the length calculated by the device 100.
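The scaling in this example can be reproduced as follows, using the figures from the text (a 175.2 cm user, a 1.71 m activity range, and a 5 cm on-screen object); the function name is hypothetical.

```python
def on_screen_range_length(range_m, height_m, object_length_cm):
    """Scale the real-world activity range into screen units by the
    ratio of the range to the subject's height."""
    ratio = range_m / height_m
    return ratio * object_length_cm

# Ratio 1.71 m / 1.752 m = 0.976; applied to a 5 cm object, 4.88 cm.
image_length_cm = on_screen_range_length(1.71, 1.752, 5.0)
```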
As described with reference to Fig. 8, the device 100 determines the activity range of a subject (e.g., a user) based on the shape information of the subject. The device 100 may also determine the activity range of the subject based on configuration information input by the user. In this case, the shape information obtained by the device 100 need not be considered when determining the activity range of the subject.
Figs. 9A to 9C are schematic diagrams illustrating an example, performed by a device in accordance with an embodiment of the present disclosure, of determining the activity range of a subject based on configuration information input by a user.
Referring to Fig. 9A, an object 920 is output to the screen 910 of the device 100. The user may send configuration information for setting an activity range to the device 100 through an input unit (e.g., the input unit 2910).
For example, the user may set a certain region 930 around the object 920 output to the screen 910 through the input unit (e.g., the input unit 2910). The region 930 set by the user may be displayed in the shape of a circle, a polygon, or a straight line, with the object 920 shown at the center of that shape. The device 100 may output the region 930 set by the user to the screen 910.
Referring to Fig. 9B, the device 100 may determine the activity range of a subject based on the region 930 set by the user. For example, if it is assumed that the region 930 set by the user is a circle, the device 100 may determine a cylindrical region as the activity range, where the base of the cylinder is the circle set by the user, and the height of the cylinder corresponds to a value obtained by multiplying the user's height by a specific ratio. The ratio to be multiplied by the user's height may be stored in a storage unit (e.g., the storage unit 2940) included in the device 100.
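The cylindrical construction above can be sketched as below; the 1.2 height ratio is an illustrative stand-in for the stored value, and the names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CylinderRange:
    """Activity range: base circle set by the user, height scaled from
    the user's height by a stored ratio."""
    base_radius_m: float
    height_m: float

def cylinder_range(base_radius_m, user_height_m, stored_ratio=1.2):
    return CylinderRange(base_radius_m, user_height_m * stored_ratio)

rng = cylinder_range(base_radius_m=1.0, user_height_m=1.75)
```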
The device 100 may output the determined activity range to the screen 910. If the activity range is circular, the diameter of the circle may be output to the screen 910 as the activity range. If the activity range is rectangular, the lengths of the sides of the rectangle may be output to the screen 910 as the activity range. In other words, information 940 may be output to the screen 910, and through the information 940 the user can recognize the size of the activity range. If the object 920 is output to the screen 910, the activity range may be displayed as an image 950 containing the object 920.
Referring to Fig. 9C, the device 100 may determine the activity range of a user 960 so as to reflect the posture of the user 960 in the activity range. For example, while an object 971 representing the user 960 is being output, the device 100 may display guidance or an instruction 980 on the screen of the device 100, requesting the user 960 to assume a certain posture.
The device 100 may output to the screen a first posture 972 to be assumed by the user 960, and output in real time the shape 973 of the user 960 photographed by the camera 320. Therefore, the user 960 can check in real time whether the current shape of the user 960 matches the first posture 972.
When the first posture 972 of the user 960 is captured, the device 100 calculates the activity range of the user 960 by considering both the shape information of the user 960 (e.g., the height, arm length, and leg length of the user 960) and the first posture 972. For example, assuming that the length of one leg of the user 960 is 1.5 m and the width of the chest of the user 960 is 0.72 m, the device 100 may calculate the activity range of the user 960 corresponding to the first posture 972, made when the user 960 spreads his or her arms, to be 3.72 m.

The device 100 may output the calculated value 991 of the activity range to the screen, and output the activity range of the user 960 as an image 974 containing the object 971.
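The arithmetic behind the 3.72 m figure appears to be two outstretched limbs plus the torso width (2 × 1.5 m + 0.72 m); a one-line sketch under that reading of the example:

```python
def posture_range_width(limb_length_m, torso_width_m):
    # Worked example from the text: 2 x 1.5 m + 0.72 m = 3.72 m for the
    # first posture, made with the limbs spread to either side.
    return 2 * limb_length_m + torso_width_m

print(posture_range_width(1.5, 0.72))  # 3.72
```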
Here, a plurality of postures to be made by the user 960 may be selected according to the details of the content. For example, if the content is a dancing game, the device 100 may calculate an activity range in advance for each of the plurality of postures to be made while the user 960 plays the dancing game.

In other words, once the first activity range 991 of the user 960 according to the first posture 972 is determined, the device 100 outputs a second posture 975 to the screen and outputs in real time the shape 977 of the user 960 captured by the camera 320. The device 100 then calculates the activity range 992 of the user 960 according to the second posture 975, and may output the activity range 992 of the user 960 as an image 976 near the object.
Figures 10A and 10B are schematic diagrams illustrating examples in which a device determines the activity range of a subject, in accordance with an embodiment of the present disclosure.
Referring to Figure 10A, an example is shown in which the left and right sides of an activity range 1020 are symmetric about the center of an object 1010. Assuming that the subject is a person, the range within which the person, standing at a certain position, can stretch his or her arms or legs may be the person's activity range. Accordingly, the device 100 may determine a cylinder centered on the person's torso as the activity range 1020.
Referring to Figure 10B, an example is shown in which the left and right sides of an activity range 1040 are asymmetric about the center of an object 1030. Assuming that the subject is a person, the person's movement may not be symmetric. For example, as shown in Figure 10B, if the person moves one leg forward while the other leg is held in place, the left and right sides of the person's body may not be symmetric about the center of the body.

Accordingly, the device 100 may determine the activity range of the subject based on a combination of the farthest points that each part of the subject can reach while the subject moves within a certain region.
As described with reference to Figures 8 to 10B, the device 100 may obtain the shape information of a subject and determine the activity range of the subject by using the shape information. In addition, the device 100 may determine the activity range of a subject based on a user's settings. The device 100 may also obtain the shape information of each of a plurality of subjects and determine an activity range for each of the plurality of subjects.

Hereinafter, examples in which a device determines an activity range for each of a plurality of subjects are described with reference to Figures 11 to 16B.
Figure 11 is a schematic diagram illustrating an example in which a device obtains the shape information of a plurality of subjects, in accordance with an embodiment of the present disclosure.
Referring to Figure 11, an example with a plurality of users 1110 and 1120 is shown. For ease of description, two users 1110 and 1120 are shown in Figure 11, but the plurality of users 1110 and 1120 is not limited thereto.

The device 100 obtains the shape information of each of the plurality of users 1110 and 1120. Examples in which the device 100 obtains the shape information of each of the plurality of users 1110 and 1120 were described with reference to Figures 3 to 4B. For example, the device 100 may obtain the shape information of each of the plurality of users 1110 and 1120 by using data corresponding to images captured by the camera 320. The camera 320 may capture an image that includes all of the plurality of users 1110 and 1120. Alternatively, the camera 320 may capture a first image including the first user 1110 and then capture a second image including the second user 1120.
Figure 12 is a schematic diagram illustrating an example in which the shape information and activity range of each of a plurality of users are output to the screen of the device 100, in accordance with an embodiment of the present disclosure.
Referring to Figure 12, the shape information and activity range 1220 of a first user and the shape information and activity range 1230 of a second user may be output to a screen 1210. Examples in which the device 100 determines the activity ranges of the first user and the second user were described with reference to Figures 8 to 10. In Figure 12, it is assumed that there are two users in total. However, as described above, the number of users is not limited. Accordingly, the shape information and activity ranges output to the screen 1210 may increase or decrease in accordance with the number of users.
In addition, Figure 12 shows the shape information and activity range 1220 of the first user and the shape information and activity range 1230 of the second user being output substantially simultaneously, but the output is not limited thereto. For example, the shape information and activity range 1220 of the first user and the shape information and activity range 1230 of the second user may be output alternately over time.
Figure 13A is a schematic diagram illustrating an example in which a plurality of objects are output on the screen of a device, in accordance with an embodiment of the present disclosure.
Referring to Figure 13A, the device 100 may output a plurality of objects 1320 and 1330 to a screen 1310. The point at which each subject corresponding to the objects 1320 and 1330 is currently located can therefore be checked in real time.

The device 100 may display the activity ranges 1340 and 1350 of the subjects together with the objects 1320 and 1330. Accordingly, whether the activity ranges of the subjects overlap each other can be checked in real time based on the current position of each subject.
If the activity ranges 1340 and 1350 of the subjects overlap each other, the device 100 may decline to execute the content. For example, if the content is a computer game, the device 100 may not execute the computer game. This situation is described in more detail below with reference to Figure 13B.
Figure 13B is a schematic diagram illustrating an example in which a device does not execute content, in accordance with an embodiment of the present disclosure.
Referring to Figure 13B, the objects 1320 and 1330 are images of users participating in a computer game. If the activity range 1340 of the first user 1320 overlaps the activity range 1350 of the second user 1330, the device 100 may not execute the computer game.

For example, the device 100 may display an image 1360 on the screen 1310 or output a sound to indicate that the activity range 1340 overlaps the activity range 1350, and may then decline to execute the computer game. If, as the first user 1320 or the second user 1330 moves, the activity range 1340 and the activity range 1350 no longer overlap each other, the device 100 may thereafter execute the computer game.
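The gating described here, declining to run the game while the ranges overlap, can be sketched for circular activity ranges. The circle model and the function names are assumptions; the patent does not fix the shape of the ranges in this figure.

```python
import math

def ranges_overlap(center_a, radius_a, center_b, radius_b):
    """Circular activity ranges overlap (or touch) when the distance
    between their centers is at most the sum of their radii."""
    gap = math.hypot(center_b[0] - center_a[0], center_b[1] - center_a[1])
    return gap <= radius_a + radius_b

def may_execute_content(center_a, radius_a, center_b, radius_b):
    # The device does not execute the computer game while the users'
    # activity ranges overlap.
    return not ranges_overlap(center_a, radius_a, center_b, radius_b)

print(may_execute_content((0, 0), 1.0, (3, 0), 1.0))    # True
print(may_execute_content((0, 0), 1.0, (1.5, 0), 1.0))  # False
```

As the users move, the same check can be re-evaluated each frame, which is how the game would later become executable once the ranges separate.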
Hereinafter, examples in which the activity range of a user of content is determined together with the activity range of a person, animal, or thing that does not use the content are described with reference to Figures 14A to 16B.
Figures 14A and 14B are schematic diagrams illustrating an example in which a device obtains the shape information of a plurality of subjects and determines the activity ranges of the plurality of subjects, in accordance with an embodiment of the present disclosure.
Referring to the plurality of subjects 1410 and 1420 shown in Figures 14A and 14B, Figure 14A represents a user 1410 who uses the content and a non-user 1420 who does not use the content. The non-user 1420 may be present in the same region as the user 1410. For example, assuming that the content is a computer game, the user 1410 is a person participating in the computer game, and the non-user 1420 is a person not participating in the computer game.

The device 100 obtains the shape information of the user 1410 and the shape information of the non-user 1420, and determines the respective activity ranges of the user 1410 and the non-user 1420. As described above, the device 100 may obtain the respective shape information of the user 1410 and the non-user 1420 from the data sent by the camera 320.
Figure 14B shows the respective shapes of the user 1410 and the non-user 1420 output to a screen 1430 of the device 100. The device 100 may display the activity range 1440 of the user 1410 and the activity range 1450 of the non-user 1420 together with the object representing the user 1410 and the object representing the non-user 1420. Accordingly, whether the activity ranges of the user 1410 and the non-user 1420 overlap each other can be checked in real time based on their current positions.
Figures 15A and 15B are schematic diagrams illustrating another example in which a device obtains the shape information of a plurality of subjects and determines the activity ranges of the plurality of subjects, in accordance with an embodiment of the present disclosure.
Referring to Figures 15A and 15B, the plurality of subjects 1510 and 1520 represent a user 1510 of the content and an animal 1520, respectively. The device 100 obtains the respective shape information of the user 1510 and the animal 1520, and calculates their activity ranges. As described above, the device 100 may obtain the respective shape information of the user 1510 and the animal 1520 by using the camera 320.
Figure 15B shows the respective shapes of the user 1510 and the animal 1520 output to a screen 1530 of the device 100. The device 100 may display the activity range 1540 of the user 1510 and the activity range 1550 of the animal 1520 together with the object representing the user 1510 and the object representing the animal 1520.
Figures 16A and 16B are schematic diagrams illustrating yet another example in which a device obtains the shape information of a plurality of subjects and determines the activity ranges of the plurality of subjects, in accordance with an embodiment of the present disclosure.
Referring to Figures 16A and 16B, the plurality of subjects 1610, 1620, and 1630 represent a user 1610 of the content and things 1620 and 1630, respectively. In Figure 16A, the things 1620 and 1630 are shown as obstacles, such as furniture, present in the same region as the user 1610.

The device 100 obtains the respective shape information of the user 1610 and the obstacles 1620 and 1630, and calculates activity ranges. As described above, the device 100 may obtain the respective shape information of the user 1610 and the obstacles 1620 and 1630 by using the camera 320.
Figure 16B shows the respective shapes of the user 1610 and the obstacles 1620 and 1630 output to a screen 1640 of the device 100. Among the user 1610 and the obstacles 1620 and 1630, the device 100 may display the activity range 1650 of the user 1610 together with the object representing the user 1610 and the objects representing the obstacles 1620 and 1630.
Figure 17 is a flowchart illustrating an example in which a device obtains the shape information of subjects and determines the activity ranges of the subjects, in accordance with an embodiment of the present disclosure.
Referring to Figure 17, the operations are processed in time sequence by the device 100 shown in Figure 29 or by the equipment 101 for executing content shown in Figure 31. It should therefore be understood that the descriptions provided with reference to Figures 1 to 16 also apply to the operations described with reference to Figure 17, even where those descriptions are not repeated here.

In addition, operation 1710 described with reference to Figure 17 is substantially the same as operation 210 described with reference to Figure 2. A detailed description of operation 1710 is therefore not provided here.
In operation 1720, the device 100 predicts the movement path of a first subject and the movement path of a second subject. The first subject and the second subject may both be users of the content, which may be a game requiring user activity and movement. For example, assuming that the content is a dancing game or a fighting game, there may be situations in which, according to instructions given by the details of the content, the users must move about in the same place or must move to another place.
The device 100 analyzes the details of the content, and predicts the movement path of the first subject and the movement path of the second subject based on the analyzed details. For example, the device 100 may analyze the details of the content by reading the content stored in a storage unit (e.g., the storage unit 2940). Accordingly, regardless of the type of content used by a user, the device 100 can prevent collisions between subjects.
If the first subject is a user of the content and the second subject is a non-user of the content, the device 100 predicts only the movement path of the user. In other words, the device 100 does not predict the movement path of the non-user. Before the content is executed, the first subject may be set in advance as a user and the second subject may be set in advance as a non-user. The device 100 can therefore determine which of the first subject and the second subject is a user.
In operation 1730, the device 100 determines a first area based on the shape information and movement path of the first subject. In other words, the device 100 determines the activity range of the first subject based on the shape information and movement path of the first subject.

In operation 1740, the device 100 determines a second area based on the shape information and movement path of the second subject. In other words, the device 100 determines the activity range of the second subject based on the shape information and movement path of the second subject. If the second subject is a non-user of the content, the device 100 may determine the activity range of the second subject by using only the shape information of the second subject.
Hereinafter, an example in which a device determines the activity range of a subject based on the shape information and movement path of the subject is described with reference to Figure 18.
Figure 18 is a schematic diagram illustrating an example in which a device determines the activity range of a subject based on the shape information and movement path of the subject, in accordance with an embodiment of the present disclosure.
Referring to Figure 18, a first user 1810 moving from left to right and a second user 1820 moving from right to left are shown.

The device 100 may determine the activity range 1831 at the initial position of the first user 1810 based on the shape information of the first user 1810. In other words, the device 100 may determine the activity range 1831 of the first user 1810 while the first user 1810 remains stationary at the initial position.
There may be situations in which, according to the details of the content executed by the device 100, a user must move in a specific direction. There may also be situations in which, according to the details of the content, a user must perform a specific activity while moving. Assuming that the first user 1810 must perform a specific activity while moving from left to right, the device 100 determines the activity ranges 1832 and 1833 at each position of the first user 1810 along the movement path of the first user 1810.

The device 100 may determine the final activity range 1830 of the first user 1810 by merging all of the determined activity ranges 1831 to 1833.
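Merging the per-position ranges into a final range such as 1830 can be sketched as the bounding box of circular ranges sampled along the movement path. This is a simplification under stated assumptions: the patent does not fix the merge geometry, and the positions and radius below are invented.

```python
def merge_activity_ranges(path_positions, radius):
    """Bounding box of circular activity ranges of the given radius
    placed at each sampled position along the movement path."""
    min_x = min(x for x, _ in path_positions) - radius
    max_x = max(x for x, _ in path_positions) + radius
    min_y = min(y for _, y in path_positions) - radius
    max_y = max(y for _, y in path_positions) + radius
    return (min_x, min_y, max_x, max_y)

# Three positions of a user moving left to right, as in Figure 18.
print(merge_activity_ranges([(0, 0), (1, 0), (2, 0)], 0.9))
# (-0.9, -0.9, 2.9, 0.9)
```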
The device 100 may determine the activity range of the second user 1820 by using the same method used to determine the activity range of the first user 1810. In other words, the device 100 determines the activity range 1841 at the initial position of the second user 1820, and determines the activity ranges 1842 to 1844 at each position of the second user 1820 along the movement path of the second user 1820. The device 100 may then determine the final activity range 1840 of the second user 1820 by merging the determined activity ranges 1841 to 1844.
The device 100 may determine the activity ranges 1831 to 1833 and the activity ranges 1841 to 1844 in consideration of the activities to be performed by the users 1810 and 1820 while they are moving. For example, the device 100 may calculate the activity ranges by using a mapping table stored in a storage unit (e.g., the storage unit 2940). According to the activity type required by the content, the mapping table includes information about the activity range needed beyond the activity range determined by using the shape information of the users 1810 and 1820. For example, if the activity required by the content is one in which a user stretches out an arm while stepping forward with one foot, the mapping table may include information indicating that an activity range 1.7 times the activity range determined by using the shape information of the users 1810 and 1820 is additionally needed.
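Such a mapping table can be sketched as a simple dictionary lookup. The 1.7 multiplier is the example given in the text; the key names and the second entry are invented for illustration.

```python
# Mapping table keyed by the activity type a content requires.
# Only the 1.7 factor comes from the text; everything else here
# is an illustrative assumption.
ACTIVITY_RANGE_MULTIPLIER = {
    "step_and_stretch_arm": 1.7,
    "stand_in_place": 1.0,
}

def required_activity_range(base_range_m, activity_type):
    """Scale the shape-information-based range by the stored factor."""
    return base_range_m * ACTIVITY_RANGE_MULTIPLIER.get(activity_type, 1.0)

print(required_activity_range(2.0, "step_and_stretch_arm"))  # 3.4
```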
Figure 18 describes an example in which the users 1810 and 1820 move in a two-dimensional (2D) space, but the space in which users move is not limited thereto. In other words, there may be situations in which, according to the details of the content, the users 1810 and 1820 must move in a three-dimensional (3D) space. In the case where the users 1810 and 1820 are to move in a 3D space, the device 100 may likewise determine the respective activity ranges of the users 1810 and 1820 according to the method described with reference to Figure 18.
Referring back to Figure 2, in operation 240, the device 100 predicts whether the first subject and the second subject may collide with each other based on whether the first area overlaps the second area. In other words, the device 100 predicts whether the first subject and the second subject may collide based on whether the activity range of the first subject overlaps the activity range of the second subject. Predicting whether the first subject and the second subject may collide means predicting, while the first subject and the second subject have not yet collided, the possibility of a collision between them. For example, if the value of the distance between the activity range of the first subject and the activity range of the second subject is smaller than a certain value, the device 100 may determine that the first subject and the second subject may collide.

Hereinafter, an example in which a device predicts whether a first subject and a second subject may collide is described with reference to Figure 19.
Figure 19 is a flowchart illustrating an example in which a device predicts whether a first subject and a second subject may collide, in accordance with an embodiment of the present disclosure.
Referring to Figure 19, the operations are processed in time sequence by the device 100 shown in Figure 29 or by the equipment 101 for executing content shown in Figure 31. It should therefore be understood that the descriptions provided with reference to Figure 1 also apply to the operations described with reference to Figure 19, even where those descriptions are not repeated here.
In operation 1910, the device 100 calculates the shortest distance between the first subject and the second subject. The shortest distance is calculated in consideration of the activity range of the first subject and the activity range of the second subject. In more detail, the device 100 selects, from the points included in the activity range of the first subject, a first point nearest the second subject. The device 100 also selects, from the points included in the activity range of the second subject, a second point nearest the first subject. The device 100 then calculates the distance between the first point and the second point, and determines the calculated distance as the shortest distance between the first subject and the second subject.
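For circular activity ranges, the closest-point construction of operation 1910 reduces to the distance between the centers minus both radii, clamped at zero. The circle model is an assumption; for other range shapes a general nearest-point search would be needed.

```python
import math

def shortest_distance(center_a, radius_a, center_b, radius_b):
    """Distance between the nearest points of two circular activity
    ranges; 0 when the ranges touch or overlap (as in Fig. 20A)."""
    center_gap = math.hypot(center_b[0] - center_a[0],
                            center_b[1] - center_a[1])
    return max(0.0, center_gap - radius_a - radius_b)

print(shortest_distance((0, 0), 1.0, (5, 0), 1.5))  # 2.5
print(shortest_distance((0, 0), 2.0, (1, 0), 2.0))  # 0.0
```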
In operation 1920, the device 100 determines whether the shortest distance is greater than a predetermined distance value. The predetermined distance value may be a value stored in advance in a storage unit (e.g., the storage unit 2940) or a value input by a user.
Hereinafter, examples in which a device compares the shortest distance with the predetermined distance value are described with reference to Figures 20A to 20C.
Figures 20A to 20C are schematic diagrams illustrating examples in which a device compares the shortest distance between subjects with a predetermined distance value, in accordance with an embodiment of the present disclosure.
Referring to Figure 20A, an example is shown in which the activity range 2010 of a first user overlaps the activity range 2020 of a second user. In other words, the activity range 2010 of the first user contains the activity range 2020 of the second user.

In this case, the value of the shortest distance between the first user and the second user calculated by the device 100 is 0. In other words, the cases in which the value of the shortest distance is 0 include: the case in which the activity range 2010 of the first user and the activity range 2020 of the second user overlap; and the case in which the activity range 2010 of the first user touches the activity range 2020 of the second user at a certain point.

Accordingly, if the value of the shortest distance between the first user and the second user is 0, the device 100 determines that the value of the shortest distance is smaller than the predetermined distance value.
Referring to Figure 20B, the schematic diagram shows a case in which the value of the shortest distance between the users is m. Here, the predetermined distance value k is assumed to be greater than m.

The activity range 2030 of the first user and the activity range 2040 of the second user do not overlap each other, nor do they touch at any point. The device 100 selects, from the points included in the activity range 2030 of the first user, a first point nearest the second user, and selects, from the points included in the activity range 2040 of the second user, a second point nearest the first user. The device 100 then determines the distance from the first point to the second point as the shortest distance m between the first user and the second user. Since the shortest distance m is smaller than the predetermined distance value k, the device 100 performs operation 1930 shown in Figure 19.
Referring to Figure 20C, the schematic diagram shows a case in which the value of the shortest distance between the users is n. Here, the predetermined distance value k is assumed to be smaller than n.

The activity range 2050 of the first user and the activity range 2060 of the second user do not overlap each other, nor do they touch at any point. The device 100 selects, from the points included in the activity range 2050 of the first user, a first point nearest the second user, and selects, from the points included in the activity range 2060 of the second user, a second point nearest the first user. The device 100 then determines the distance from the first point to the second point as the shortest distance n between the first user and the second user. Since the shortest distance n is greater than the predetermined distance value k, the device 100 performs operation 1940 shown in Figure 19.
Referring back to Figure 19, if the shortest distance is greater than the predetermined distance value k, the device 100 determines in operation 1940 that the first subject and the second subject will not collide with each other. Here, the cases in which the first subject and the second subject do not collide include the case in which, even if the first subject or the second subject performs an activity different from its current activity, the first subject and the second subject cannot collide with each other. If, on the other hand, the shortest distance is smaller than the predetermined distance value k, the device 100 determines in operation 1930 that the first subject and the second subject may collide with each other. In this case, the cases in which the first subject and the second subject may collide include the case in which, if the first subject or the second subject performs an activity different from its current activity, there is a possibility of a collision between the first subject and the second subject.
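Operations 1920 to 1940 amount to a single threshold comparison. The handling of a distance exactly equal to k is an assumption here, since the text only specifies the strictly-smaller and strictly-greater cases.

```python
def collision_predicted(shortest_dist, k):
    """Operation 1930 when the shortest distance is smaller than the
    predetermined value k; operation 1940 when it is greater."""
    return shortest_dist < k

print(collision_predicted(0.0, 0.5))  # True  (Fig. 20A: ranges overlap)
print(collision_predicted(0.3, 0.5))  # True  (Fig. 20B: m < k)
print(collision_predicted(0.9, 0.5))  # False (Fig. 20C: n > k)
```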
Figures 21A to 21C are schematic diagrams illustrating examples of images output to the screen of a device when the device determines that subjects may collide with each other, in accordance with an embodiment of the present disclosure.
Figures 21A and 21B show examples in which dynamic objects (e.g., images representing users) are output to a screen 2110. Figure 21C shows an example in which a dynamic object (e.g., an image representing a user) and a static object (e.g., an image representing furniture) are output to the screen.
Referring to Figures 21A to 21C, if it is predicted that the subjects may collide, the device 100 may output a warning message indicating that the subjects may collide. The warning message may be light, a color, or an image output from the screen of the device 100, or a sound output from a loudspeaker included in the device 100. In addition, if the device 100 is executing content, the device 100 may pause the execution of the content as an example of a warning message.
For example, the device 100 may output images 2120 and 2130 indicating warning messages to the screen 2110. As an example, referring to Figure 21A, the device 100 may output an image 2120 indicating that the possibility of a collision between the users is high, requesting that one user move away from the place of the other user. Even if the activity range 2140 of the first user and the activity range 2150 of the second user do not overlap each other, the device 100 may still output the image 2120 indicating that the possibility of a collision between the first user and the second user is high if the value of the shortest distance between the activity range 2140 and the activity range 2150 is smaller than the predetermined distance value k.
As another example, referring to Figure 21B, the device 100 may pause the execution of the currently executed content while outputting an image 2130 indicating that the possibility of a collision between the users is very high. If the activity range 2140 of the first user and the activity range 2150 of the second user overlap each other, the device 100 may pause the execution of the currently executed content while outputting the image 2130.
Referring to Figure 21C, if a chair 2180 is located in the activity range 2170 of a user, the device 100 may output an image 2160 requesting that the chair 2180 be removed from the activity range 2170 of the user.
After pausing the execution of the content, if the activity ranges of the subjects move away from each other so that the value of the distance between them becomes greater than a predetermined value, the device 100 re-executes the content. Hereinafter, an example in which a device resumes executing content after pausing its execution is described with reference to Figure 21D.
Figure 21D is a schematic diagram illustrating an example in which a device resumes executing content after pausing its execution, in accordance with an embodiment of the present disclosure.
Referring to Figure 21D, if, while content is being executed, it is predicted that a first user 2191 and a second user 2192 may collide with each other, the device 100 may pause the execution of the content and output an image 2195 indicating that the first user 2191 and the second user 2192 may collide. While the execution of the content is paused, the camera 320 continues to capture the first user 2191 and the second user 2192. The device 100 can therefore check whether the distance between the first user 2191 and the second user 2192 increases or decreases after the execution of the content is paused.
After the execution of the content is paused, the distance between the first user 2191 and the second user 2192 may increase if either of them moves from his or her current position. In other words, the first user 2191 may move in a certain direction so as to move away from the second user 2192, or the second user 2192 may move in a certain direction so as to move away from the first user 2191. If, as at least one of the first user 2191 and the second user 2192 moves, the value of the distance between the activity range 2193 of the first user 2191 and the activity range 2194 of the second user 2192 becomes greater than the predetermined value, the device 100 may resume executing the content. In other words, if it is determined, as at least one of the first user 2191 and the second user 2192 moves, that the first user 2191 and the second user 2192 cannot collide with each other, the device 100 may resume executing the content. In this case, the device 100 may output to the screen an image 2196 indicating that the execution of the content is resumed.
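The pause-and-resume behavior around Figure 21D can be sketched as a small state update driven by the shortest distance between the users' activity ranges; the class and parameter names are assumptions.

```python
class ContentRunner:
    """Pauses while a collision is predicted and resumes once the
    users' activity ranges move farther apart than the threshold."""

    def __init__(self, threshold_k):
        self.threshold_k = threshold_k
        self.paused = False

    def update(self, shortest_dist):
        # Re-evaluated continuously: the camera keeps capturing both
        # users even while execution is paused.
        self.paused = shortest_dist <= self.threshold_k
        return self.paused

runner = ContentRunner(threshold_k=0.5)
print(runner.update(0.0))  # True  (collision predicted: pause, image 2195)
print(runner.update(0.8))  # False (users moved apart: resume, image 2196)
```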
As described above, the device 100 may determine activity ranges based on the shape information of each subject and predict whether the subjects may collide with each other. The device 100 can thereby prevent collisions between subjects in advance.
Figure 22 is a schematic diagram illustrating an example in which the device 100 compares the shortest distance between subjects with a predetermined distance value, in accordance with an embodiment of the present disclosure.
Referring to Figure 22, an example is shown in which a first subject 2210 and a second subject 2220 are both users of content. However, the first subject 2210 and the second subject 2220 are not limited thereto. In other words, the second subject 2220 may be a non-user of the content, or may correspond to a thing such as an animal, a plant, or furniture.
As described above with reference to Figure 18, the device 100 may determine the activity range 2230 of the first user 2210 and the activity range 2240 of the second user 2220 based on the movement paths of the first user 2210 and the second user 2220 and on at least one activity to be performed by the first user 2210 and the second user 2220. Based on the activity range 2230 of the first user 2210 and the activity range 2240 of the second user 2220, the device 100 calculates the shortest distance k between the first user 2210 and the second user 2220, and predicts the possibility of a collision between the first user 2210 and the second user 2220 based on the shortest distance k. The method by which a device predicts the possibility of a collision between users was described above with reference to Figures 19 to 20C.
Figures 23A to 23C are schematic diagrams illustrating examples of images output to the screen of a device when the device determines that users may collide with each other, in accordance with an embodiment of the present disclosure.
Referring to Figures 23A to 23C, if it is predicted that the users may collide, the device 100 may output to a screen 2310 an image 2320 indicating that the subjects may collide. As an example, as shown in Figure 23A, the device 100 may output to the screen 2310 an image 2320 notifying the users of a potential collision between them. As another example, as shown in Figure 23B, the device 100 may pause the execution of the content while outputting to the screen 2310 an image 2330 notifying the users of a potential collision between them. After the image 2320 or 2330 is output to the screen 2310, if the users readjust their positions, the device 100 predicts the possibility of a collision between the users again based on the readjusted positions. As shown in Figure 23C, if it is determined that a collision between the users is not possible, the device 100 may continue executing the content without outputting the image 2320 or 2330 to the screen 2310.
As described with reference to FIGS. 2 to 23C, the device 100 may predict the possibility of a collision between subjects based on their activity ranges. The device 100 may also set a safe zone or a danger zone in a space. Then, if a subject moves beyond the safe zone or enters the danger zone, the device 100 may output a warning message.
Hereinafter, an example in which the device sets a danger zone or a safe zone is described with reference to FIG. 24. In addition, an example in which the device outputs a warning message when a subject moves beyond the safe zone or enters the danger zone is described with reference to FIG. 25.
FIG. 24 is a diagram illustrating an example of setting a safe zone or a danger zone, performed by the device, according to an embodiment of the present disclosure.
Referring to FIG. 24, an example is shown in which the device 100 outputs an image showing a space to the screen 2410. The space here refers to the space photographed by the camera 320. The device 100 may output an image representing the space to the screen 2410 by using data transmitted from the camera 320. In addition, the device 100 may classify the space into, and set, a safe zone 2420 and a danger zone 2430.
As an example, the device 100 may set the safe zone 2420 and the danger zone 2430 based on information input by a user. The user may input the information for classifying the space into the safe zone 2420 or the danger zone 2430 to the image output to the screen 2410. For example, the user may select a region in the image and designate the selected region as the safe zone 2420 or the danger zone 2430. If the user designates the selected region as the safe zone 2420, the remaining region of the space in the image, excluding the safe zone 2420, is determined to be the danger zone 2430.
As another example, the device 100 may automatically designate parts of the space as the safe zone 2420 or the danger zone 2430 without user intervention. For example, the device 100 may designate empty space in which no things exist in the image as the safe zone 2420, and designate space in which things exist in the image as the danger zone 2430.
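The automatic designation just described can be sketched as a per-cell classification of an occupancy grid derived from the camera image. The grid representation and zone labels are assumptions made for illustration only.

```python
def classify_zones(occupancy):
    # occupancy[row][col] is True where the camera detects a thing.
    # Empty cells become the safe zone, occupied cells the danger zone.
    return [["danger" if cell else "safe" for cell in row]
            for row in occupancy]

grid = [[False, False, True],
        [False, True,  True]]
zones = classify_zones(grid)  # zones[0][0] == "safe", zones[1][2] == "danger"
```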
After the device 100 designates the safe zone 2420 and the danger zone 2430 in the space, if a subject moves beyond the safe zone 2420 or enters the danger zone 2430, the device 100 may output a warning message. Hereinafter, an example of outputting a warning message, performed by the device 100, is described with reference to FIG. 25.
FIG. 25 is a diagram illustrating an example of outputting a warning message, performed by the device when a subject moves beyond the safe zone or enters the danger zone, according to an embodiment of the present disclosure.
Referring to FIG. 25, an example is shown in which a safe zone 2520 and a danger zone 2530 are set in an image output to the screen 2510 of the device 100. FIG. 25 shows an example in which the boundary 2540 between the safe zone 2520 and the danger zone 2530 is displayed on the screen 2510. However, the boundary 2540 may not be displayed.
If the subject represented by the object 2550 moves beyond the safe zone 2520 (or enters the danger zone 2530), the device 100 may output a warning message. For example, assuming that the subject is a user playing a dance game, when a part of the user's body moves beyond the safe zone 2520 while the user plays the dance game (i.e., if the object 2550 output to the screen 2510 moves beyond the safe zone 2520), the device 100 may output a warning message. The warning message may be light, a color, a certain image, or the like output from the screen 2510 of the device 100, or a sound output from a speaker included in the device 100. In addition, if the device 100 is executing content, the device 100 may pause execution of the content as an example of the warning message.
For example, if the user moves beyond the safe zone 2520, the device 100 may display an image 2560 instructing the object 2550 to move back to the safe zone 2520, or may pause execution of the content.
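The boundary check behind FIG. 25 reduces to testing whether any tracked body point of the object 2550 lies outside the safe zone 2520. The rectangular zone shape and the joint-list interface below are illustrative assumptions.

```python
def outside_safe_zone(point, zone):
    # zone is (x_min, y_min, x_max, y_max) in screen coordinates.
    x, y = point
    x_min, y_min, x_max, y_max = zone
    return not (x_min <= x <= x_max and y_min <= y <= y_max)

def warn_if_needed(body_points, zone):
    # A warning fires as soon as any part of the user's body crosses
    # the boundary of the safe zone.
    if any(outside_safe_zone(p, zone) for p in body_points):
        return "move back to the safe zone"
    return None
```

For instance, with the safe zone (0, 0, 4, 4), tracked joints [(1, 1), (5, 2)] trigger the warning because one point lies outside the zone.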
The device 100 may designate some things existing in the space as dangerous things, and output a warning message when a subject comes very close to these dangerous things.
Hereinafter, an example in which the device designates dangerous things is described with reference to FIG. 26. In addition, an example in which the device outputs a warning message when a subject approaches a dangerous thing is described with reference to FIG. 27.
FIG. 26 is a diagram illustrating an example of designating dangerous things, performed by the device, according to an embodiment of the present disclosure.
Referring to FIG. 26, an example is shown in which the device 100 outputs an image showing a space to the screen 2610. The space refers to the space photographed by the camera 320. The device 100 may output an image showing the space to the screen 2610 by using data transmitted from the camera 320. In addition, the device 100 may designate some things existing in the space as a dangerous-thing group 2620.
As an example, the device 100 may designate the dangerous-thing group 2620 based on information input by a user. The user may input information for designating the dangerous-thing group 2620 to the image output to the screen 2610. For example, the user may select a thing from the image and designate the selected thing as part of the dangerous-thing group 2620.
As another example, the device 100 may automatically designate the dangerous-thing group 2620 without user intervention. For example, the device 100 may designate all things present in the space shown in the image as the dangerous-thing group 2620. Alternatively, the device 100 may designate things having features that meet predetermined criteria as the dangerous-thing group 2620. For example, the device 100 may designate all objects having sharp surfaces or sharp corners as the dangerous-thing group 2620.
After the device 100 designates the dangerous-thing group 2620, if a subject approaches the dangerous-thing group 2620, the device 100 may output a warning message. Hereinafter, an example of outputting a warning message, performed by the device, is described with reference to FIG. 27.
FIG. 27 is a diagram illustrating an example of outputting a warning message, performed by the device when a subject approaches a dangerous thing, according to an embodiment of the present disclosure.
Referring to FIG. 27, an example is shown in which a dangerous thing 2720 is designated in an image output to the screen 2710 of the device 100. If the object 2730 output to the screen 2710 comes very close to the dangerous thing 2720 (in fact, if the subject represented by the object 2730 comes very close to the dangerous thing 2720), the device 100 may output a warning message 2740. For example, if a baby comes very close to the dangerous thing 2720, the device 100 may output the warning message 2740. Examples of the warning message have been described above with reference to FIG. 25.
As described above with reference to FIGS. 2 to 27, the device 100 may output the warning message on its own. However, the output of the warning message is not limited thereto. In other words, if a warning message is to be output, the device 100 may transmit a warning message signal to another device.
FIG. 28 is a diagram illustrating an example of transmitting warning information from the device to another device, according to an embodiment of the present disclosure.
Referring to FIG. 28, the device 100 may output a warning message on its own or transmit warning information to another device 2800. For example, if the object 2810 representing a baby, output to the screen of the device 100, comes very close to a dangerous thing 2820, the device 100 may output a warning image 2830 on its own and, at the same time, transmit warning information to the other device 2800 connected to the device 100. Then, the other device 2800 may output a warning image 2840. Here, the device 100 and the other device 2800 may be connected to each other by using a wired or wireless communication method.
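Forwarding the warning to a companion device, as in FIG. 28, can be sketched as serializing a warning payload and handing it to both a local display sink and a remote transport at the same time. The payload fields and the sink interfaces are assumptions; a real system would use one of the wired or wireless links listed later in the disclosure.

```python
import json

def make_warning(subject_id, danger_id):
    # Serialize the warning signal the device would transmit
    # (payload shape is an assumption for illustration).
    return json.dumps({"type": "warning",
                       "subject": subject_id,
                       "danger": danger_id})

def broadcast(payload, local_sink, remote_sink):
    # Show the warning locally and forward the same payload to the
    # connected device at the same time, mirroring FIG. 28.
    local_sink(payload)
    remote_sink(payload)

local, remote = [], []
broadcast(make_warning("baby", "object-2820"), local.append, remote.append)
# Both devices now hold the same warning payload.
```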
FIG. 28 shows an example in which the warning images 2830 and 2840 are used as the warning information, but the warning information is not limited thereto. As an example, the device 100 and the other device 2800 may output the warning messages described with reference to FIG. 25.
FIG. 29 is a block diagram of an example of the device, according to an embodiment of the present disclosure.
Referring to FIG. 29, the device 100 includes an input unit 2910, a control unit 2920, and an output unit 2930.
The device 100 shown in FIG. 29 includes components for performing the method of preventing a collision between multiple subjects described above with reference to FIGS. 1 to 28. Therefore, it should be understood that the descriptions provided with reference to FIGS. 1 to 28 also apply to the device 100 shown in FIG. 29, even if those descriptions are not repeated here.
The device 100 shown in FIG. 29 includes only the components described with reference to the current embodiment. Therefore, one of ordinary skill in the art will understand that, in addition to the components shown in FIG. 29, other general-purpose components may also be included.
The input unit 2910 receives an image captured by the camera 320 from the camera 320. For example, the input unit 2910 may include a wired communication interface or a wireless communication interface. The input unit 2910 may receive the image from the camera 320 through at least one of the wired communication interface and the wireless communication interface.
The wired communication interface may include a High-Definition Multimedia Interface (HDMI), a Digital Visual Interface (DVI), and the like, but is not limited thereto.
The wireless communication interface may include a Bluetooth communication unit, a Bluetooth Low Energy (BLE) communication unit, a short-range communication interface, a Wi-Fi communication unit, a ZigBee communication unit, an Infrared Data Association (IrDA) communication unit, a Wi-Fi Direct (WFD) communication unit, an ultra-wideband (UWB) communication unit, or an Ant+ communication unit, but is not limited thereto.
The wireless communication interface may transmit/receive a wireless signal to/from at least one of a base station, an external terminal (e.g., the camera 103), and a server on a mobile communication network. The wireless signal may include a voice call signal, a video telephony call signal, or various forms of data used to transmit and receive text or multimedia messages.
The input unit 2910 includes a unit via which a user inputs data for controlling the device 100. For example, the input unit 2910 may include a keypad, a dome switch, a touch pad (a capacitive overlay type, a resistive overlay type, an infrared beam type, a surface acoustic wave type, an integral strain gauge type, or a piezoelectric type), a jog wheel, or a jog switch, but is not limited thereto.
The control unit 2920 obtains shape information of a first subject and shape information of a second subject. As an example, assuming that content is being executed, the first subject may refer to a user of the content. The second subject may refer to another user using the content together with the first subject, or may refer to a non-user who does not use the content. The second subject may also be a thing such as an animal, a plant, or furniture. The content described herein refers to a program that requires an activity of a user. For example, a computer game executed based on the activity of a user, such as a dance game or a sports game, may correspond to the content.
The shape information refers to information indicating the form of a subject. The form includes the length, volume, and shape of the subject. As an example, assuming that the subject is a person, the shape information includes all information indicating the shape of the person, such as the person's height, arm length, leg length, torso thickness, arm thickness, leg thickness, and the like. Assuming that the subject is a chair, the shape information includes all information indicating the shape of the chair, such as the chair's height, width, and the like.
The control unit 2920 determines the activity range of the first subject by using the shape information of the first subject, and determines the activity range of the second subject by using the shape information of the second subject. An activity range refers to a range including points reachable by at least a part of a subject. As an example, the activity range of a subject may be a range including points reachable by a part of the subject while the subject remains in a designated region. As another example, the activity range of a subject may be a range including points reachable by a part of the subject while the subject moves along a certain path. As yet another example, the activity range of a subject may be a range including points reachable by a part of the subject while the subject is active in a certain region.
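The activity range of a stationary user, for instance, can be approximated from the shape information as a circle whose radius is the user's reach. The reach formula and the activity scaling factor below are illustrative assumptions, not the mapping table mentioned later in the disclosure.

```python
def activity_radius(shape, activity_factor=1.0):
    # Reach of a standing user: arm length plus half the shoulder
    # width, scaled up for more vigorous activities (assumed formula).
    return (shape["arm_length"] + shape["shoulder_width"] / 2.0) * activity_factor

def activity_range(position, shape, activity_factor=1.0):
    # Circular activity range (center, radius) around the user's position.
    return position, activity_radius(shape, activity_factor)

user = {"arm_length": 0.7, "shoulder_width": 0.4}
center, radius = activity_range((0.0, 0.0), user)  # radius == 0.9
```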
If the content requires the user to move, the control unit 2920 may determine the activity range of the user based on the movement path of the user. In addition, the control unit 2920 may determine the activity range of a subject based on information input by a user.
The control unit 2920 predicts whether the first subject and the second subject will collide with each other, based on whether the activity range of the first subject and the activity range of the second subject overlap each other. Predicting whether the first subject and the second subject will collide refers to predicting the possibility of a collision between the first subject and the second subject while they have not yet collided. For example, if the value of the difference between the activity range of the first user and the activity range of the second user is less than a certain value, the control unit 2920 may determine that the first user and the second user will collide with each other.
The output unit 2930 outputs an image to the screen of the device 100 or outputs a warning message through a speaker included in the device 100. For example, the output unit 2930 may output an object representing a subject to the screen, and output a warning signal through the screen or the speaker.
In addition, all or some of the operations of the input unit 2910, the control unit 2920, and the output unit 2930 may be performed by using a software module, but the operations of the input unit 2910, the control unit 2920, and the output unit 2930 are not limited thereto.
In addition, the input unit 2910, the control unit 2920, and the output unit 2930 may be operated by one or more processors, but the operations of the input unit 2910, the control unit 2920, and the output unit 2930 are not limited thereto.
FIG. 30 is a block diagram of an example of the device, according to an embodiment of the present disclosure.
The device 100 shown in FIG. 30 includes components for performing the method of preventing a collision between multiple subjects described above with reference to FIGS. 1 to 28. Therefore, it should be understood that the descriptions provided with reference to FIGS. 1 to 28 also apply to the device 100 shown in FIG. 30, even if those descriptions are not repeated here.
The device 100 shown in FIG. 30 includes only the components described with reference to the current embodiment. Therefore, one of ordinary skill in the art will understand that, in addition to the components shown in FIG. 30, other general-purpose components may also be included.
Referring to FIG. 30, the control unit 2920 reads and analyzes the details included in content stored in the storage unit 2940. For example, assuming that a subject includes a user of content and the content is being executed, the control unit 2920 obtains information about the movement path of the subject by analyzing the details included in the content. In addition, the control unit 2920 determines the activity range of the subject by using the obtained information. Other examples of the operation of the control unit 2920 have been described above with reference to FIG. 29.
The control unit 2920 generates a warning message. In more detail, if it is determined that subjects will collide with each other, the control unit 2920 generates a warning message. The warning message may be light, a color, a certain image, or the like output from the screen of the device 100, or a sound output from a speaker included in the device 100. In addition, if the device is executing content, the device 100 may pause execution of the content as an example of the warning message.
The storage unit 2940 stores data regarding the shape information and the activity ranges of subjects. In addition, the storage unit 2940 stores a mapping table required to determine the activity range of a subject. The storage unit 2940 also stores the details of the content executed by the device 100.
FIG. 31 is a block diagram of an example of a system for executing content, according to an embodiment of the present disclosure.
Referring to FIG. 31, the system 1 includes an apparatus 101 for executing content, a display device 102, and a camera 103. Assuming that the content is a computer game, the apparatus 101 for executing content may refer to a game console.
The apparatus 101 for executing content, the display device 102, and the camera 103 may be connected to each other by cables and may transmit/receive data to/from each other through the cables (i.e., by using a wired communication method). Alternatively, the apparatus 101 for executing content, the display device 102, and the camera 103 may transmit/receive data to/from each other by using a wireless communication method. In the following, the input unit 3110 and the output unit 3130 included in the apparatus 101 for executing content are described. However, components corresponding to the input unit 3110 and the output unit 3130 may be separately included in the camera 103 and the display device 102.
The camera 103 captures an image of a subject (i.e., an object) and transmits the captured image to the apparatus 101 for executing content. Examples of the operation of the camera 103 have been described above with reference to FIGS. 1 to 28.
The operations of the input unit 3110, the control unit 3120, and the storage unit 3140 included in the apparatus 101 for executing content have been described with reference to FIGS. 29 and 30. Therefore, detailed descriptions thereof are not provided here.
The output unit 3130 transmits an image showing the form of an object, or a warning message, to the display device 102. For example, the output unit 3130 may include a wired communication interface or a wireless communication interface. The output unit 3130 may transmit the image or the warning message to the display device 102 through at least one of the above interfaces.
The wired communication interface may include HDMI, a Digital Visual Interface, and the like, but is not limited thereto.
The wireless communication interface may include a Bluetooth communication interface, a Bluetooth Low Energy (BLE) communication interface, a near field communication (NFC) interface, a Wi-Fi communication interface, a ZigBee communication interface, an Infrared Data Association (IrDA) communication interface, a Wi-Fi Direct (WFD) communication interface, an ultra-wideband (UWB) communication interface, or an Ant+ communication interface, but is not limited thereto.
In addition, the wireless communication interface may transmit/receive a wireless signal to/from at least one of a base station, an external terminal (e.g., the display device 102), and a server on a mobile communication network. The wireless signal may include a voice call signal, a video telephony call signal, or various forms of data used to transmit and receive text or multimedia messages.
The display device 102 outputs the image or the warning message received from the apparatus 101 for executing content.
As described above, according to one or more of the above embodiments, the device 100 or the apparatus 101 for executing content may determine, based on the shape information of each subject, an activity range including points reachable by the subject, and predict whether the subjects will collide with each other. Accordingly, the device 100 or the apparatus 101 for executing content may prevent a collision between subjects in advance. In addition, if it is predicted that subjects will collide with each other, the device 100 or the apparatus 101 for executing content may generate a warning message or pause execution of the content.
In addition, other embodiments can also be implemented through computer-readable code/instructions in/on a medium (e.g., a computer-readable medium) to control at least one processing element to implement any of the above-described embodiments. The medium may correspond to any medium that permits the storage and/or transmission of the computer-readable code.
The computer-readable code may be recorded/transferred on a medium in a variety of ways, with examples of the medium including recording media, such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs or DVDs), and transmission media such as Internet transmission media. Thus, the medium may be a defined and measurable structure including or carrying a signal or information, such as a device carrying a bitstream according to one or more embodiments. The medium may also be a distributed network, so that the computer-readable code is stored/transferred and executed in a distributed fashion. Furthermore, the processing element may include a processor or a computer processor, and processing elements may be distributed and/or included in a single device.
It should be understood that the embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments.
While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
Claims (15)
1. An electronic device for executing an application, the electronic device comprising:
an output unit configured to transmit a first image and a second image to a display device, wherein the first image is generated based on a form of a first user, and the second image is generated based on a form of a second user; and
a control unit configured to:
determine a position of the first user, and predict a movement of the first user to another position based on details of the application,
predict a possibility of a collision between the first user and the second user, based on the predicted movement of the first user, and
control, based on the predicted possibility, the output unit to transmit a warning message indicating the collision to the display device.
2. The electronic device of claim 1, wherein
the control unit determines a first area, the first area including a farthest point reachable by a part of the first user while the first user is active in a certain area; and
the control unit determines a second area, the second area including a farthest point reachable by a part of the second user while the second user is active in a certain area.
3. The electronic device of claim 2, wherein, if the first area and the second area overlap each other, the control unit determines that there is a possibility of a collision between the first user and the second user.
4. The electronic device of claim 2, wherein the control unit predicts a movement path of the first user and a movement path of the second user, and determines the first area and the second area by further considering the predicted movement path of the first user and the predicted movement path of the second user.
5. The electronic device of claim 4, wherein the movement path of the first user and the movement path of the second user are predicted based on the details of the application.
6. The electronic device of claim 1, wherein the first image and the second image include images generated by an external camera.
7. The electronic device of claim 1, wherein the warning message includes an image output from an external display device or a sound output from the external display device.
8. The electronic device of claim 1, wherein, if it is determined that there is a possibility of a collision between the first user and the second user, the control unit pauses execution of the application.
9. A method of executing an application, the method comprising:
transmitting a first image and a second image to a display device, wherein the first image is generated based on a form of a first user, and the second image is generated based on a form of a second user;
determining a position of the first user, and predicting a movement of the first user to another position based on details of the application;
predicting a possibility of a collision between the first user and the second user, based on the predicted movement of the first user; and
transmitting a warning message indicating the collision to the display device, based on the predicted possibility.
10. The method of claim 9, further comprising:
determining a first area including a farthest point reachable by a part of the first user while the first user is active in a certain area; and
determining a second area including a farthest point reachable by a part of the second user while the second user is active in a certain area.
11. The method of claim 10, further comprising: generating the warning message if the first area and the second area overlap each other.
12. The method of claim 10, further comprising: predicting a movement path of the first user,
wherein the determining of the first area comprises determining the first area by further considering the predicted movement path.
13. An electronic device for executing an application, the electronic device comprising:
an output unit configured to transmit a first image and a second image to a display device, wherein the first image is generated based on a form of a user, and the second image is generated based on a form of at least one subject located close to the user; and
a control unit configured to:
determine a position of the user, and predict a movement of the user to another position based on details of the application,
predict a possibility of a collision between the user and the subject, based on the predicted movement of the user, and
control, based on the predicted possibility, the output unit to transmit a warning message indicating the collision to the display device.
14. An electronic device for executing an application, the electronic device comprising:
an output unit configured to transmit a first image and a second image to a display device, wherein the first image is generated based on a form of a user, and the second image is generated based on a form of at least one subject located close to the user; and
a control unit configured to set a danger zone based on externally input information and, when the control unit determines that the user enters the danger zone, control the output unit to transmit, to the display device, a warning message indicating the possibility that the user enters the danger zone.
15. An electronic device for executing an application, the electronic device comprising:
an input unit configured to receive an input of an image signal; and
a control unit configured to:
control setting of a first area corresponding to a first object and a second area corresponding to a second object, wherein the first object and the second object are included in the image signal;
measure a distance between the first area and the second area; and
output a predetermined message if the value of the measured distance is less than a predetermined value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810244879.2A CN108404402B (en) | 2014-03-21 | 2015-03-17 | Method and apparatus for preventing collision between subjects |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2014-0033695 | 2014-03-21 | ||
KR20140033695 | 2014-03-21 | ||
KR1020140169178A KR20150110283A (en) | 2014-03-21 | 2014-11-28 | Method and apparatus for preventing a collision between objects |
KR10-2014-0169178 | 2014-11-28 | ||
KR10-2015-0018872 | 2015-02-06 | ||
KR1020150018872A KR102373462B1 (en) | 2014-03-21 | 2015-02-06 | Method and apparatus for preventing a collision between subjects |
PCT/KR2015/002547 WO2015142019A1 (en) | 2014-03-21 | 2015-03-17 | Method and apparatus for preventing a collision between subjects |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810244879.2A Division CN108404402B (en) | 2014-03-21 | 2015-03-17 | Method and apparatus for preventing collision between subjects |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105190487A CN105190487A (en) | 2015-12-23 |
CN105190487B true CN105190487B (en) | 2018-04-17 |
Family
ID=54341464
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580000721.5A Active CN105190487B (en) | 2014-03-21 | 2015-03-17 | Method and apparatus for preventing from conflicting between main body |
CN201810244879.2A Active CN108404402B (en) | 2014-03-21 | 2015-03-17 | Method and apparatus for preventing collision between subjects |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810244879.2A Active CN108404402B (en) | 2014-03-21 | 2015-03-17 | Method and apparatus for preventing collision between subjects |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20150110283A (en) |
CN (2) | CN105190487B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017120915A1 (en) * | 2016-01-15 | 2017-07-20 | 邓娟 | Data collecting method of surrounding movement monitoring technique and head-mounted virtual reality device |
CN107233733B (en) * | 2017-05-11 | 2018-07-06 | 腾讯科技(深圳)有限公司 | The treating method and apparatus of target object |
US20190033989A1 (en) * | 2017-07-31 | 2019-01-31 | Google Inc. | Virtual reality environment boundaries using depth sensors |
JP6911730B2 (en) * | 2017-11-29 | 2021-07-28 | 京セラドキュメントソリューションズ株式会社 | Display device, image processing device, processing execution method, processing execution program |
CN108854066B (en) * | 2018-06-21 | 2024-03-12 | 腾讯科技(上海)有限公司 | Method, device, computer equipment and storage medium for processing behavior state in game |
KR102174695B1 (en) * | 2018-11-15 | 2020-11-05 | 송응열 | Apparatus and method for recognizing movement of object |
KR20240024471A (en) | 2022-08-17 | 2024-02-26 | 배재대학교 산학협력단 | Worker collision safety management system and method using object detection |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101380520A (en) * | 2007-09-05 | 2009-03-11 | 财团法人工业技术研究院 | Method for adjusting inertia sensing range and sensitivity and inertia sensing interaction device and system |
CN102008823A (en) * | 2009-04-26 | 2011-04-13 | 艾利维公司 | Method and system for controlling movements of objects in a videogame |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6884171B2 (en) * | 2000-09-18 | 2005-04-26 | Nintendo Co., Ltd. | Video game distribution network |
JP2005121531A (en) * | 2003-10-17 | 2005-05-12 | Navitime Japan Co Ltd | Portable navigation device, controlling method, and control program thereof |
US7489265B2 (en) * | 2005-01-13 | 2009-02-10 | Autoliv Asp, Inc. | Vehicle sensor system and process |
US20110199302A1 (en) * | 2010-02-16 | 2011-08-18 | Microsoft Corporation | Capturing screen objects using a collision volume |
CN102685382B (en) * | 2011-03-18 | 2016-01-20 | 安尼株式会社 | Image processing apparatus and method and moving body collision prevention device |
US9266019B2 (en) * | 2011-07-01 | 2016-02-23 | Empire Technology Development Llc | Safety scheme for gesture-based game |
US9081177B2 (en) * | 2011-10-07 | 2015-07-14 | Google Inc. | Wearable computer with nearby object response |
2014
- 2014-11-28 KR KR1020140169178A patent/KR20150110283A/en unknown
2015
- 2015-03-17 CN CN201580000721.5A patent/CN105190487B/en active Active
- 2015-03-17 CN CN201810244879.2A patent/CN108404402B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101380520A (en) * | 2007-09-05 | 2009-03-11 | 财团法人工业技术研究院 | Method for adjusting inertia sensing range and sensitivity and inertia sensing interaction device and system |
CN102008823A (en) * | 2009-04-26 | 2011-04-13 | 艾利维公司 | Method and system for controlling movements of objects in a videogame |
Also Published As
Publication number | Publication date |
---|---|
CN108404402B (en) | 2021-07-20 |
CN105190487A (en) | 2015-12-23 |
CN108404402A (en) | 2018-08-17 |
KR20150110283A (en) | 2015-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105190487B (en) | Method and apparatus for preventing a collision between subjects | |
US10905944B2 (en) | Method and apparatus for preventing a collision between subjects | |
US11386629B2 (en) | Cross reality system | |
US11869158B2 (en) | Cross reality system with localization service and shared location-based content | |
JP7079231B2 (en) | Information processing equipment, information processing system, control method, program | |
US10656720B1 (en) | Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments | |
CN105190483B (en) | Detecting gestures performed with at least two control objects | |
US9558592B2 (en) | Visualization of physical interactions in augmented reality | |
CN106255939A | World-locked display quality feedback |
JP6478360B2 (en) | Content browsing | |
JP6558839B2 | Mediated reality |
US10600253B2 (en) | Information processing apparatus, information processing method, and program | |
CN106255943A | Transitions between body-locked and world-locked augmented reality |
CN110163976A | Method, apparatus, terminal device, and storage medium for virtual scene transition |
JP2023503257A (en) | Pathable world mapping and localization | |
CN105339867A (en) | Object display with visual verisimilitude | |
CN111295234A (en) | Method and system for generating detailed data sets of an environment via game play | |
CN108106605A | Context-based depth sensor control |
CN107850990A | Shared mediated-reality content |
CN107407959A | Gesture-based manipulation of three-dimensional images |
CN112927260B (en) | Pose generation method and device, computer equipment and storage medium | |
CN114026606A (en) | Fast hand meshing for dynamic occlusion | |
CN106569605A (en) | Virtual reality-based control method and device | |
Hoggenmueller et al. | Enhancing pedestrian safety through in-situ projections: a hyperreal design approach | |
KR101710198B1 (en) | Method for Displaying Hologram Object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||