CN105745917A - Interactive augmented reality using a self-propelled device - Google Patents

Info

Publication number
CN105745917A
CN105745917A (application CN201480062743.XA)
Authority
CN
China
Prior art keywords
virtual environment
image
computing device
mobile computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201480062743.XA
Other languages
Chinese (zh)
Inventor
F·波洛
J·卡罗尔
S·卡斯塔特-史密斯
R·英格拉姆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sphero Inc
Original Assignee
Orbotix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 14/054,636 (US9827487B2)
Application filed by Orbotix Inc filed Critical Orbotix Inc
Publication of CN105745917A publication Critical patent/CN105745917A/en

Classifications

    • A63F 13/327: Interconnection arrangements between game servers and game devices using local area network [LAN] connections, using wireless networks, e.g. Wi-Fi® or piconet
    • A63F 13/332: Interconnection arrangements between game servers and game devices using wide area network [WAN] connections, using wireless networks, e.g. cellular phone networks
    • A63F 13/65: Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • G06F 3/011: Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T 19/006: Manipulating 3D models or images for computer graphics; mixed reality

Abstract

A method for operating a mobile computing device is disclosed. The method may include establishing a communication link between the mobile computing device and a second computing device. The second computing device may provide a virtual environment for the mobile computing device. Furthermore, the mobile computing device may allow a user to control a self-propelled device, which may be rendered as a virtual entity within the virtual environment.

Description

Interactive augmented reality using a self-propelled device
Background of the invention
As mobile computing devices have improved, users have been able to put them to a variety of different purposes. A user can not only operate a smartphone to make telephone calls and browse the Internet, but can also use the smartphone to perform a variety of other tasks.
Brief description of the drawings
The disclosure herein is illustrated by way of example, and not by way of limitation, in the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
Fig. 1 illustrates an example system for operating a computing device, according to an embodiment;
Fig. 2 illustrates an example method for operating a computing device, according to an embodiment;
Figs. 3A-3B illustrate examples of processed images, according to an embodiment;
Fig. 4 illustrates an example hardware diagram of a system for operating a computing device, according to an embodiment;
Fig. 5 illustrates an example method for controlling a self-propelled device as an augmented reality entity using a mobile computing device linked to a second computing device;
Figs. 6A-6B illustrate an example of controlling a self-propelled device as an augmented reality entity using a mobile computing device linked to a second computing device.
Detailed description of the invention
The embodiments described herein provide for a computing device that can detect one or more circular objects in an image (e.g., a ball, or a self-propelled device having a spherical housing) and track the detected circular objects. The computing device can use a detected circular object as input for performing one or more operations or processes on the computing device.
In some embodiments, one or more images, including frames of a real-time video, can be received from an image capturing device of the computing device. The computing device can run one or more applications, or operate in one or more modes, that use components of the image capturing device in order to receive visual input. The visual input can be of a scene that the lens of the image capturing device is focused on or pointed at, and/or of objects within that scene. For example, the scene can include an object of interest that is in motion and has a round shape.
Embodiments provide for the computing device to receive a plurality of images in order to detect one or more circular objects (corresponding to one or more objects of interest) in the images. A circular object depicted in an image can correspond to an object of interest having at least one housing or structure that is round or partially round, such as an ellipse, an oval, a disc, or a sphere. For example, the object of interest can correspond to a ball, a round object, a cylindrical object, or a self-propelled device having a spherical housing that is included in the scene (e.g., in the visual input detected by the image capturing device). In some examples, a self-propelled device can be modified (e.g., through after-market assembly) to include a round or spherical aspect (e.g., a round object can be attached to the self-propelled device, or a ping-pong ball can be placed in the bed of a remote-controlled truck). The computing device can process and use the objects detected in the images as input for performing one or more operations or processes on the computing device.
In some embodiments, each received image can be processed individually in order to detect the one or more circular objects. The computing device can use one or more detection techniques, together or individually, in order to detect the circular objects. According to one or more embodiments, the detection techniques can include using image filters and detection algorithms based on the size of the circular object. In addition, the detection techniques can be used to determine position information for the one or more circular objects based on the relative positions of the circular objects in the images. Detecting a circular object in the images enables the computing device to track the motion of the circular object, as well as its velocity and/or acceleration.
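For illustration only, the following is a minimal sketch of how successive per-frame detections could yield the velocity and acceleration estimates mentioned above; the function name, the pixel units, and the fixed frame interval are assumptions, not details from the patent:

```python
def estimate_motion(prev_pos, curr_pos, prev_vel, frame_dt=1 / 30):
    """Estimate velocity (pixels/s) and acceleration (pixels/s^2) of a
    detected circular object from its positions in consecutive frames."""
    vx = (curr_pos[0] - prev_pos[0]) / frame_dt
    vy = (curr_pos[1] - prev_pos[1]) / frame_dt
    ax = (vx - prev_vel[0]) / frame_dt
    ay = (vy - prev_vel[1]) / frame_dt
    return (vx, vy), (ax, ay)
```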
After detecting one or more circular objects in the received images, the computing device can use the detected circular objects and their respective position information as input for performing additional operations or processes. In one embodiment, the computing device can adjust the images that include the detected circular objects and present the adjusted images on a display device. In other embodiments, the computing device can use a detected circular object as input for controlling the detected object (e.g., as a remote device).
According to one or more embodiments, the image capturing device can be distinct and separate from the computing device that detects the one or more circular objects in the one or more images. The image capturing device and the computing device can communicate with each other wirelessly so that the computing device can receive the one or more images from the image capturing device. A recording device, such as a video recording device, can also be separate from the computing device and in wireless communication with the computing device. In other embodiments, the devices can be part of, or be incorporated together as, one device.
The embodiments described herein also provide for the operations and/or processes that are to be performed by the recording device, the image capturing device, and/or the computing device to be performed at different times and in different orders (e.g., shifted in time).
One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. The instructions can be stored in one or more memory resources of the computing device. A programmatically performed step may or may not be automatic.
One or more embodiments described herein can be implemented using programmatic modules or components of a system. A programmatic module or component can include a program, a subroutine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions. As used herein, a module or component can exist on a hardware component independently of other modules or components. Alternatively, a module or component can be a shared element or process of other modules, programs, or machines.
Some embodiments described herein can generally require the use of computing devices, including processing and memory resources. For example, one or more embodiments described herein can be implemented, in whole or in part, on computing devices such as digital cameras, digital camcorders, desktop computers, cellular or smart phones, personal digital assistants (PDAs), laptop computers, printers, digital picture frames, and tablet devices. Memory, processing, and network resources can all be used in connection with the establishment, use, or performance of any embodiment described herein (including with the performance of any method or with the implementation of any system).
Furthermore, one or more embodiments described herein can be implemented through the use of instructions that are executable by one or more processors. The instructions can be carried on a computer-readable medium. Machines shown or described with the figures below provide examples of processing resources and computer-readable media on which instructions for implementing the embodiments can be carried and/or executed. In particular, the numerous machines shown with the embodiments include one or more processors and various forms of memory for holding data and instructions. Examples of computer-readable media include permanent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage media include portable storage units, such as CD or DVD units, flash memory (such as carried on smart phones, multifunctional devices, or tablets), and magnetic memory. Computers, terminals, and network-enabled devices (e.g., mobile devices such as cell phones) are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable media. Additionally, embodiments can be implemented in the form of computer programs, or a computer-usable carrier medium capable of carrying such a program.
System description
Fig. 1 illustrates an example system for operating a computing device, according to an embodiment. A system such as described with respect to Fig. 1 can be implemented on, for example, a mobile multi-functional computing device (e.g., a smartphone or tablet device) with an integrated image capturing component. In variations, system 100 can be implemented on notepads, laptop computers, or other computing devices that can be operated in an environment in which a camera is controlled or operated to track a moving object.
In the example of Fig. 1, system 100 operates to process image input in order to dynamically detect a circular object, such as an object in motion. The detected circular object can be part of a housing of a device that is in motion and/or under control of the computing device. According to some examples described, the circular object is detected as part of a programmatic process in which a device that incorporates the circular object is controlled in its movement or operation. In variations, the circular object is detected in motion as part of a programmatic process in which the presence of the object is used to generate other programmatic processes, such as an augmented reality that uses the moving circular object as input. Accordingly, system 100 can detect circular objects that correspond to objects of interest in motion. The detection of such objects can provide input that enables the computing device to perform other operations, for instance, controlling the object of interest, or incorporating a representation of the object of interest into an augmented reality displayed on the computing device.
Furthermore, system 100 can perform dimensional analysis of a plurality of images that depict a scene including the object of interest. Specifically, system 100 can perform the dimensional analysis in order to determine the distance of the object of interest from the image capturing device and/or the computing device. To this end, in one example, circular objects can be effectively detected and processed in the images by the components of system 100.
In one embodiment, system 100 includes an object detection component 110, an image adjustment component 130, a user interface (UI) component 140, a device control component 150, and a wireless communication component (WCOM) 160. The components of system 100 combine to receive a plurality of images from an image capturing device and to automatically process the images in order to detect one or more circular objects depicted in the images. Each image can be processed using one or more techniques so that a detected object can be processed as input for performing one or more operations on the computing device.
According to an embodiment, the object detection component 110 can also include sub-components, such as an image filter 112, a gradient detection component 120, and an image marking component 122. These components can combine to enable the object detection component 110 to detect and track one or more objects detected in the plurality of images.
The computing device can run one or more applications and/or operate in one or more different modes. In one embodiment, system 100 can be operated in response to a user executing or launching an application or program that uses visual input detected by the image capturing device to perform one or more processes (e.g., a gaming application or a device calibration settings program). The object detection component 110 can receive visual input 114, such as image input, from the image capturing device in order to detect one or more circular objects in one or more images. For example, the image capturing device (e.g., an integrated camera) can receive and capture a scene (e.g., the field of view that the lens is pointed at, and/or objects within it). The visual input 114 can be in the form of a sequence of images, or video input (e.g., a plurality of images captured continuously at 30-60 frames per second).
In some embodiments, a preview of the images being received by the computing device can be provided on a display device of the computing device. For example, the visual input 114 can also be provided to the UI component 140 so that the UI component 140 can generate a preview image of the received visual input 114 (along with one or more features that can be presented with the preview image, e.g., zoom-in or zoom-out features, or an image capture feature). The display device (e.g., a touch-sensitive display) can present the scene as a dynamically changing, real-time image of whatever the image capturing device is currently pointed at. The image can include one or more objects of interest in the scene that have a circular characteristic (e.g., that have a round housing or a portion of a housing that is round). The user can also capture and store one or more images of the scene by pressing a capture button or trigger, or by using another user interface feature (e.g., tapping a "capture image" graphic feature provided on the touch-sensitive display).
In some embodiments, the object detection component 110 can process each image individually in order to detect objects of interest. In particular, an object of interest can be designated as matching a particular shape, such as a semi-sphere or a sphere (or another portion of a sphere). For each image received via the visual input 114, the object detection component 110 can run image recognition software and/or other image processing methods in order to detect the distinguishing circular characteristic of the object of interest. For example, the object detection component 110 can scan the pixels of each individually captured image for pixels that correspond to a sphere, a semi-sphere, or another variation (depending on the circular characteristic that is designated).
The object detection component 110 can use different detection techniques, together or individually, in order to detect one or more circular objects in a single image. In one embodiment, the image filter 112 of the object detection component 110 can receive one or more images and apply a filter, such as a grayscale filter, to each received image. Applying a grayscale filter to an image can convert each pixel of the image to grayscale (e.g., based on intensity information). The image filter 112 can provide the grayscale image 116 of each received image to the gradient detection component 120 and the image marking component 122. Once represented as a grayscale image, a trained object detector can scan the grayscale pixels for circular objects that potentially correspond to the object of interest. The use of a grayscale filter facilitates fast image object detection, so that the object of interest can be monitored in real time as it moves.
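For illustration only, a grayscale conversion step of the kind performed by the image filter 112 might look like the following minimal sketch in Python using OpenCV; the function and variable names are assumptions, not from the patent:

```python
import cv2
import numpy as np

def to_grayscale(frame_bgr: np.ndarray) -> np.ndarray:
    """Convert a BGR video frame to a single-channel grayscale image.

    Grayscale reduces each pixel to an intensity value, preserving the
    brightness gradients needed for circle detection while discarding
    color, which keeps per-frame processing fast."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
```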
In some embodiments, the objects of interest can include additional characteristics to aid in their tracking. For example, the circular characteristic of the object of interest can also be combined with additional characteristics, such as color brightness (e.g., white, silver, yellow, etc.), illumination, or surface patterns, among other visual landmarks of the structure. As an example, an object of interest that is painted a bright color (e.g., white) can, when a grayscale filter is applied, yield an object in the processed input image with a lighter shade of gray than the rest of the scene. In one embodiment, the grayscale images 116 are provided to the gradient detection component 120 and the image marking component 122, which use one or more image processing methods (e.g., apply one or more algorithms) to detect circular objects in each grayscale image 116.
Furthermore, in variations, known (or predetermined) information about the object of interest can be used in performing object detection. For example, a user can provide input 126 corresponding to visual markers of the object of interest. The input 126 can include, for example, an estimated size (e.g., radius or diameter) of the circular object and the color of the object, entered via an input mechanism (e.g., using one or more buttons or a touch-sensitive display). In another variation, the user can provide input 126 corresponding to a circular gesture on the touch-sensitive display in order to indicate the approximate size of the object as presented on the display. In other examples, information about the one or more circular objects can be stored in a memory resource of the computing device. The gradient detection component 120 can use the known information in detecting the circular object in each image.
In one embodiment, the gradient detection component 120 can process individual pixels (or some of the pixels) of a grayscale image 116 in order to determine a respective gradient for each of the individual pixels. The gradient of a particular pixel is based on the surrounding pixels, including the immediately adjacent pixels, and corresponds to a vector in whose direction the brightness level of the pixel values increases. For example, the brightness level of the particular pixel is lower than the brightness levels of other pixels along the direction of the gradient. The image marking component 122 implements logic that, for each pixel with a determined gradient, marks a point that lies in the direction of the gradient at a distance equal to the radius of the circular object (e.g., the actual radius or the radius of curvature). For example, the user can indicate (via the user input 126) that the circular object to be detected has a radius of a certain size or approximate pixel length. The approximate pixel length can be set at, for example, twenty pixels. In other examples, the approximate pixel length can be based on previously stored or predetermined information about the dimensions of the circular object in images. For each pixel, using the determined gradient, the image marking component 122 can mark the point, within twenty pixels of that particular pixel in the direction of the gradient, that has the highest brightness level. This point can represent the center or mid-point of the circular object. Based on the plurality of markings (e.g., a marking for each pixel of the grayscale image 116), the object detection component 110 can assume that a region of the image with numerous markings corresponds to the center of the circular object, and can determine the location or position of the circular object (if a circular object is present) in each grayscale image 116. The determined position information of the circular object can be relative to the image.
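A minimal sketch of a gradient-based voting scheme of this kind is shown below, for illustration only, in Python with NumPy and OpenCV; the fixed radius, the gradient threshold, and all names are assumptions rather than details taken from the patent:

```python
import cv2
import numpy as np

def detect_circle_center(gray, radius_px, grad_thresh=20.0):
    """Vote for circle centers: each strong-gradient pixel marks the
    point one radius away along its gradient direction, and the cell
    with the most marks is taken as the circle's center."""
    # Per-pixel intensity gradients; brightness increases along (gx, gy).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)

    votes = np.zeros(gray.shape, dtype=np.int32)
    ys, xs = np.nonzero(mag > grad_thresh)  # skip weak-gradient pixels
    for y, x in zip(ys, xs):
        # Step one radius along the unit gradient direction and mark it.
        cx = int(round(x + radius_px * gx[y, x] / mag[y, x]))
        cy = int(round(y + radius_px * gy[y, x] / mag[y, x]))
        if 0 <= cy < votes.shape[0] and 0 <= cx < votes.shape[1]:
            votes[cy, cx] += 1

    cy, cx = np.unravel_index(np.argmax(votes), votes.shape)
    return (cx, cy), int(votes[cy, cx])  # center estimate, vote count
```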
When determining the gradient for each pixel in a grayscale image 116, the gradient detection component 120 can also access parameters (which can be stored in a list, in a database, or in a memory resource of the computing device). The parameters can include, for example, a brightness threshold and a gradient threshold. The brightness threshold can instruct the gradient detection component 120 to disregard computing the gradient for a particular pixel if the neighboring pixels (pixels within the radius distance, in a direction away from the particular pixel) do not exhibit a pattern of increasing brightness. In another example, the gradient threshold can instruct the gradient detection component 120 to disregard computing the gradient for a particular pixel if the neighboring pixels do not exhibit a sufficiently strong change in brightness level in a direction away from the particular pixel. For pixels for which no gradient is computed, the image marking component 122 does not provide a marking.
In another embodiment, the object detection component 110 can apply different radii of the circular object in order to detect the circular object in the grayscale images 116. Because the gradient detection component 120 and the image marking component 122 can process each individual pixel of a grayscale image 116, the object detection component 110 can assume, for each grayscale image 116, that the radius used for detecting the circular object can differ from one region of the image to another. Depending on the angle, orientation, distance, and position of the image capturing device relative to the circular object, the size of the circular object in the image can vary with how far the object is from the user. The object detection component 110 can receive information about the orientation and position of the image capturing device (e.g., where the lens of the image capturing device is pointed or focused) in order to determine whether a given pixel in the image represents a region closer to the user, the user's feet, or the floor, or whether the pixel represents a point farther from the user (e.g., closer to the horizon). If the object detection component 110 assumes that a particular point in the image represents a region closer to the user, then a circular object at or near that point will generally appear larger in the image than it would if the circular object were farther from the user. As a result, for points representing regions closer to the user, the image marking component 122 can apply a larger radius when marking a region along the gradient direction of those points (since the ball to be detected is assumed to appear larger in the image).
For points representing regions farther from the user, a circular object at or near those points typically appears smaller in the image. As a result, after determining that a given point represents a region farther from the user, the image marking component 122 can apply a smaller radius when marking a region along the gradient direction of that point (e.g., ten pixels in length rather than twenty pixels). In this manner, the object detection component 110 can also determine whether the circular object is moving in one or more directions, because the size of the circular object can become larger from one image of a sequence to another (e.g., if it is moving closer to the user), or smaller from one image to another (e.g., if it is moving away from the user).
For example, based on an accelerometer and/or other sensing mechanisms of the computing device, the computing device can detect its position relative to the ground. In addition, the computing device can determine the ground plane and the horizon of the images provided by the image capturing device. Based on the determined information, if the user is holding the computing device in a manner such that the lens of the image capturing device is pointed at a region closer to her, for example, the computing device can increase the radius (e.g., size) of the circular object to be detected. On the other hand, if the user is holding the computing device such that the lens of the image capturing device is pointed at a region farther from the user (e.g., at a distance farther from the near end), the radius of the circular object to be detected can be decreased.
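One simple way to realize this scaling, sketched below for illustration only, is to interpolate the expected pixel radius between a near-field value and a far-field value based on a pixel's row relative to the detected horizon; the flat-floor assumption, the names, and the default values are all illustrative:

```python
def expected_radius_px(row, horizon_row, bottom_row,
                       near_radius_px=20.0, far_radius_px=10.0):
    """Interpolate the expected circle radius for a given pixel row.

    Rows near the bottom of the frame are assumed to be closer to the
    user (larger apparent radius); rows at or above the horizon are
    treated as far away (smaller apparent radius)."""
    if row <= horizon_row or bottom_row <= horizon_row:
        return far_radius_px
    t = (row - horizon_row) / float(bottom_row - horizon_row)
    return far_radius_px + t * (near_radius_px - far_radius_px)
```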
As discussed, for each determined gradient (for each pixel), the image marking component 122 can mark a point (e.g., a position in the image) at a distance equal to the radius of the circular object, in the direction of the gradient. In this manner, without the user having to indicate that the circular object to be detected has a certain radius, the image marking component 122 can mark the point with the highest brightness level in the direction of the gradient using varying radii. The radius can vary based on whether the lens of the image capturing device of the computing device is pointed at a region closer to, or farther from, the user.
In some embodiments, the estimated size of the one or more circular objects can be automatically configured and/or stored in a memory resource of the computing device. For example, the circular object can be a self-propelled device having a round housing (e.g., a spherical housing) that is paired with, or wirelessly connected to, the user's computing device. Information about the user's self-propelled device can be stored in a memory resource of the computing device.
The object detection component 110 can also apply different types of filters based on the user input 126. In some embodiments, depending on the color of the circular object of interest to be detected by the object detection component 110, the image filter 112 can apply other types of filters. For example, if the circular object of interest is dark in color, the user can provide input 126 indicating the color of the ball.
The object detection component 110 can detect one or more circular objects in one or more images and also provide information corresponding to the detected objects. The information about a detected object can include the object's color, size, shape (e.g., elliptical or spherical), position relative to the image, and so forth. In one embodiment, the information about the detected circular objects can be stored in a memory resource of the computing device.
According to one or more embodiments, system 100 uses the one or more objects detected in the images (including the position information of the one or more objects) as input for performing one or more operations or processes on the computing device. In the example system of Fig. 1, the object detection component 110 can provide information about the detected objects, along with the images (e.g., object detection images 124), to the image adjustment component 130 and the device control component 150. The image adjustment component 130 can operate, for example, as part of, or in conjunction with, a gaming application.
In some embodiments, based on the object detection images 124, system 100 can determine that the circular object of interest is moving. For example, two or more images corresponding to different (e.g., sequential) instances in time can include circular objects that correspond to the object of interest in the scene. In a first image of the two or more images, the circular object can be detected at a first position relative to that image. In the next image in the sequence, the circular object can be detected at a second position relative to that image. As a result of the change in position of the detected circular object in the different images, the device control component 150 and/or the image adjustment component 130 can determine that the circular object is in motion (or has moved). In other examples, the circular object of interest may be at a resting position but spinning in place. The spinning circular object can be detected in the images.
Furthermore, the circular objects detected from the image input can be translated into coordinate data for a given reference frame relevant to the image depiction. This coordinate reference frame can be translated into a real-world coordinate system in order to observe and/or predict information relevant to the position of the object of interest. If the depth of the object from the image capturing device can be determined from dimensional analysis of the object's depiction, the position of the object of interest can be mapped from its depiction in the images. In addition to position information, the velocity of the object of interest can be determined from the reference frame of the object's image depiction when mapped to a determined reference frame for the real-world object of interest.
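As an illustration of such a mapping, the sketch below uses a pinhole-camera model to place a ball of known physical radius in camera-centered coordinates, given its pixel position and apparent pixel radius; the focal length, principal point, and all names are assumptions for the example, not values from the patent:

```python
def image_to_camera_coords(u, v, r_px, ball_radius_m, focal_px, cx, cy):
    """Map a detected circle (pixel center (u, v), pixel radius r_px)
    to camera-frame coordinates (x, y, z) in meters, assuming a pinhole
    camera with focal length focal_px and principal point (cx, cy).

    Depth follows from similar triangles: z = f * R / r, where R is
    the ball's true radius and r its apparent radius in pixels."""
    z = focal_px * ball_radius_m / r_px
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z
```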
In another example, the object of interest having the circular characteristic can be placed on top of an object with a known height, rather than on the ground or floor (e.g., on top of a coffee table or on top of a remote-controlled toy truck). Using the known (or predetermined) information about the object of interest and the known height, the object detection component 110 can determine a more accurate position of the object in the image. In addition, based on the information (such as the known information and the height) and the reference frame, the image adjustment component 130 can provide a dimensionally accurate image to be presented on the display device (accurate relative to the actual position of the object of interest and the position of the image capturing device).
For example, the image adjustment component 130 can process the object detection images 124 in order to dynamically adjust at least a portion of an image. The image adjustment component 130 can also access a set of rules for mapping the coordinate reference frame to the real-world coordinate system, so that the adjusted images being presented can be dimensionally accurate relative to the real-world scene. The rules can correspond to real-world parameters. In some examples in which dimensionally accurate renditions are not desired, the real-world parameters can be altered, for example, by user input.
The image adjustment component 130 can also store the adjusted images 132 in a memory resource of the computing device. This enables the user to access the memory resource in order to view any of the stored, adjusted images 132 at a later time. In other embodiments, system 100 can perform additional processing of the images for performing other operations. The image adjustment component 130 can, for example, access previously processed images and perform additional adjustments on the images. In some variations, the processing performed by the object detection component 110 and/or the image adjustment component 130 can be performed on stored images, rather than "live" or real-time images. Thus, for example, the processing described for the object detection component 110 and/or the image adjustment component 130 can optionally be shifted in time relative to when the images are captured and/or stored.
In other examples, the image adjustment component 130 can dynamically overlay or replace the detected circular object in the images with a graphic image (e.g., a character, an animation, or some other object different from the ball), or alter the image of the detected circular object itself in real time (e.g., change the color of the circular object or distort the circular object). The adjusted images 132 are provided to the UI component 140 so that the user of the computing device can view the rendered content, including the adjusted detected object, on the display device. The UI component 140 can also generate one or more features that can be presented on the display device of the computing device. As described, in some examples, the rendered, adjusted content can be dimensionally accurate relative to the actual position of the object of interest and the position of the image capturing device.
For example, a user can watch a live sporting event (e.g., a child playing soccer at a park) and point the lens of her computing device at the field of play. The received visual input 114 can include a plurality of image frames detected by the image capturing device. The object detection component 110 can detect the circular object (corresponding to the object of interest, e.g., the soccer ball) in each image of the sequence of images (e.g., because the ball is in motion, the position of the ball in each image can vary). The image adjustment component 130 can process the detected object (e.g., the detected soccer ball) by highlighting the ball or providing a halo around the ball, and provide the images to the UI component 140. The user can then view the adjusted images 132 (with any additional features) on the display device of the computing device.
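For illustration only, one simple form of this adjustment (a halo drawn around the detected ball) is sketched below in Python with OpenCV; the names and colors are assumptions:

```python
import cv2

def highlight_ball(frame, center, radius_px):
    """Draw a halo around the detected ball, in place, on a BGR frame.

    center is an (x, y) tuple of ints; radius_px is the detected
    radius in pixels."""
    # Soft glow: a translucent filled disc blended over the frame.
    overlay = frame.copy()
    cv2.circle(overlay, center, int(radius_px * 1.5), (0, 255, 255), -1)
    cv2.addWeighted(overlay, 0.3, frame, 0.7, 0, dst=frame)
    # Crisp ring at the detected boundary of the ball.
    cv2.circle(frame, center, int(radius_px), (0, 255, 255), 2)
    return frame
```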
In other embodiments, the device control component 150 can process the object detection images 124 in order to provide one or more controls for a remote device. For example, the detected circular object can be a self-propelled device that is paired with, or wirelessly connected to, the user's computing device. The user can view a representation of the detected circular object (e.g., the user's self-propelled device) within the received visual input 114 on the display device of the computing device. The user can then interact with the displayed representation of the self-propelled device by providing user input 152 (e.g., touch input on a touch-sensitive display), which is provided to the device control component 150.
The device control component 150 receives the object detection images 124 and the user input 152 in order to determine what control information 154 to generate. Based on the information corresponding to the detected circular object received via the object detection images 124, the control information 154 can be determined by detecting user input at a particular location of the display screen, e.g., by determining that the location corresponds to the position of the detected circular object in the image. For example, the user can tap on the displayed representation of the self-propelled device to cause it to spin, or tap and drag a finger in a direction to cause the self-propelled device to move accordingly. The control information 154 is provided to the WCOM 160 so that the information can be provided to the remotely controlled device.
For example, the device control component 150 determines the position of the detected circular object in the object detection images 124 (e.g., the position of the detected circular object relative to the image). When the user provides input or performs a gesture on the representation of the self-propelled device (e.g., drags the representation to the right on the touch-sensitive display), the device control component 150 can determine the path of the gesture relative to a reference (e.g., the initial position of the displayed representation of the self-propelled device) and translate the gesture into control signals for moving the self-propelled device in a like manner. The control signals can be provided to the self-propelled device so that the self-propelled device moves accordingly.
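A minimal sketch of how a drag gesture might be translated into a control signal is shown below, for illustration only; the heading-and-speed command format, the pixel-to-meter scale, and all names are assumptions rather than the patent's protocol:

```python
import math

def gesture_to_command(start_xy, end_xy, px_per_meter, speed_scale=0.5):
    """Translate a drag gesture on the display into a heading (degrees,
    0 = "up" on screen) and a normalized speed for the device."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]          # screen y grows downward
    heading_deg = math.degrees(math.atan2(dx, -dy)) % 360
    distance_m = math.hypot(dx, dy) / px_per_meter
    speed = min(1.0, speed_scale * distance_m)
    return {"heading": heading_deg, "speed": speed}
```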
The WCOM 160 serves to exchange data between system 100 and other external devices (e.g., a remote device, such as the user's self-propelled device). In one embodiment, the WCOM 160 can implement a variety of different protocols, such as Bluetooth communication protocols, Wi-Fi communication protocols, infrared communication protocols, and so forth. The control information 154 can be wirelessly communicated to the remote device (e.g., the self-propelled device) so that the self-propelled device performs the designated action corresponding to the user's command.
In some embodiments, the device control component 150 can provide control information 154 for calibrating one or more components of the remote device (and/or one or more components of the computing device). For example, the circular object of interest (e.g., the user's self-propelled device) can have one or more gyroscopes. Over time, due to gyroscope drift, a gyroscope may need to be recalibrated. Using the object detection images 124, the device control component 150 can detect, for example, whether the self-propelled device is moving or whether it is stationary, and provide control information 154 (which includes calibration information) that is transmitted to the self-propelled device via the WCOM 160. Based on the detected path of movement of the self-propelled device and the object detection images 124, the device control component 150 can calibrate the gyroscope of the self-propelled device. In one embodiment, the device control component 150 can also receive device information 162 from the self-propelled device, which includes current information about the components of the self-propelled device (including real-time information about the gyroscope). With this information, the device control component 150 can calibrate the gyroscope of the self-propelled device. In this manner, system 100 can be used to eliminate gyroscope drift.
Similarly, one or more gyroscopes of the computing device can also be calibrated using the detected circular object. For example, when the circular object is determined to be stationary, the device control component 150 can calibrate the gyroscope of the computing device by using the detected circular object as a point of reference.
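For illustration only, one simple form such a calibration could take, assuming the vision system confirms that the device has not rotated over an interval, is to estimate the gyroscope's drift rate from the change in its reported heading; the names and units are assumptions:

```python
def estimate_gyro_bias(heading_start_deg, heading_end_deg, elapsed_s):
    """Estimate gyroscope drift rate (degrees/second) over an interval
    in which vision confirms the device is stationary; the result can
    then be subtracted from subsequent gyroscope readings."""
    return (heading_end_deg - heading_start_deg) / elapsed_s
```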
The object detection component 110 can also provide object detection images 118 to other components or systems of the computing device, and/or to remote devices (not shown in Fig. 1), for additional processing. The other components can process the detected circular objects as input for performing other operations.
According to one or more embodiments, the visual input 114 received by system 100 can also be provided from an image capturing device that is remote from the computing device. For example, the image capturing device can be separate from the computing device. The image capturing device can detect and/or capture a scene and transmit the visual input 114 to the computing device (e.g., via a cable or wirelessly). For example, when receiving the visual input 114 wirelessly from the image capturing device, system 100 can receive the visual input 114 via the WCOM 160. System 100 can then detect one or more circular objects in the images received from the remote image capturing device and perform additional operations based on the processed detected objects.
In another embodiment, system 100 can receive the images (e.g., frames from a real-time video) and then process the images after a duration of time has elapsed (e.g., detect one or more circular objects and/or adjust the images). For example, the visual input 114 provided by the image capturing device can first be stored in a memory resource of the computing device before being retrieved or received by the object detection component 110 for processing. The object detection component 110 can perform the processing in response to user input 126 (so that the visual input 114 is retrieved from the memory resource, or received by the object detection component 110). In other embodiments, system 100 can perform additional processing even after the images have been dynamically processed. For example, after the image adjustment component 130 has processed the object detection images 124 a first time, the image adjustment component 130 can perform additional processing and provide different adjusted images to the UI component 140. In this manner, the user can view different adjusted images 132 (e.g., adjusted images with different graphic image overlays) at different times.
In some embodiments, some of the components described in system 100 can be provided as individual components or as part of the same component. For example, the object detection component 110 and the image adjustment component 130 can be provided as part of the same component. In another example, the object detection component 110 and the device control component 150 can be provided as part of the same component. In other embodiments, the components described in system 100 can be provided as part of the device's operating system or as part of one or more applications (e.g., as part of a camera application, a remote device control application, or a gaming application). Logic can be implemented with a camera application (e.g., software) and/or with hardware of the image capturing device.
In some examples, the object of interest can alternatively have other predetermined shapes that can be subjected to dimensional and image analysis in order to detect the object in one or more images. For example, system 100 can utilize other object detection and/or image processing techniques to detect shapes other than circles, such as classifiers trained to detect objects with particular two-dimensional shapes (e.g., having a rectangular shape or a triangular shape). The object detection component 110 can detect such other predetermined shapes in order to translate dimensional information from the image of the object into spatial and/or positional information in the real world.
Additionally or alternatively, the examples described herein can also be used for adjacent object analysis. In adjacent object analysis, information about an object that is adjacent to the object of interest can be determined based on its known spatial relationship to the object of interest. For example, the object of interest can be a tank, where the position of the tank can be determined in the images. A gun mounted on the tank can also be tracked by using adjacent object analysis.
Furthermore, multiple objects of interest (e.g., having round or spherical characteristics) can be tracked simultaneously in one or more images of a given scene. The objects of interest can be moving or stationary. The object detection component 110 can detect the multiple objects of interest and determine position information for each tracked object. The position information of each tracked object relative to the other tracked objects can be used to determine an object of interest, or a point, that is in the midst of the other objects of interest. For example, multiple spherical objects can be detected and tracked in the images in order to determine a point cloud for determining the positions of other objects that may be near the one or more tracked objects.
Method
Fig. 2 illustrates an example method for operating a computing device, according to an embodiment. A method such as described by the embodiment of Fig. 2 can be implemented using, for example, components such as described with the embodiment of Fig. 1. Accordingly, references made to elements of Fig. 1 are for purposes of illustrating suitable elements or components for performing a step or sub-step being described.
The computing device can receive a plurality of images from its image capturing device (step 200). The plurality of images can correspond to visual input of a scene that the lens of the image capturing device is pointed at or focused on. In some embodiments, the image input can correspond to real-time video provided by the image capturing device (sub-step 202). The real-time video can include a plurality of frames (where each frame can be an image) that are detected and/or captured per second (e.g., 30 frames per second).
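For illustration only, a capture loop that receives frames of real-time video, as in steps 200 and 202, might look like the following minimal sketch in Python with OpenCV; the camera index and loop structure are assumptions:

```python
import cv2

cap = cv2.VideoCapture(0)            # default device camera
while True:
    ok, frame = cap.read()           # one frame of the real-time video
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... per-frame circular object detection would go here ...
cap.release()
```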
Each image can be individually processed in order to detect one or more circular objects in the image (step 210). The computing device can run image recognition software and/or other image processing methods in order to detect the circular objects in each image. A circular object in an image can correspond to an object of interest that is present in the scene and has a round housing characteristic. In one embodiment, the circular object can be detected by applying known information for determining the object of interest (sub-step 212). The known information can be provided by the user, or can be preconfigured and/or stored in a memory resource of the computing device.
According to one or more embodiments, the known information can include the size of the circular object of interest (sub-step 214). The size of the circular object can correspond to the radius of the object (or, for example, the radius of an ellipse) or the diameter of the object, or in some cases, a relative size as compared to the screen size of the display device. Using the dimension information about the circular object, such as the radius or diameter of the circular object, the one or more circular objects can be detected in each image. In addition, the known information can include information corresponding to the position of the computing device relative to the ground (sub-step 216). Depending on the orientation, direction, and position in which the user points the lens of the computing device, the circular object can be determined to be small in size (relative to the screen size) or large in size. In such cases, the radius or diameter of the circular object can dynamically vary in size. The circular object can be detected in each image, for example, by applying a grayscale filter, such as discussed with Fig. 1. In addition, information about the circular object, such as position information of the circular object relative to the image, can also be determined.
By detecting the one or more circular objects in the images, the computing device can use the detected objects and the position information of the detected objects as input for performing one or more operations or processes on the computing device (step 220). For example, the computing device can adjust the images (sub-step 222) by generating an overlay, or by replacing the detected circular object in the images with a graphic feature that mimics the movement of the circular object in the images. Such graphic adjustments can be useful for gaming applications that use a real-world scene as part of the game.
In one embodiment, the overlay can also be a graphic feature having three-dimensional characteristics. The graphic feature can be rendered as part of the displayed images so that it is consistent with the three-dimensional characteristics and movements of the detected circular object. For example, if the circular object rotates 180 degrees, the overlaid graphic feature can also rotate in a similar fashion.
The computing device can also process the detected objects as input for wirelessly controlling the detected circular object (e.g., a self-propelled device) (sub-step 224). For example, the user can manipulate the image containing the representation of the self-propelled device on the display device in order to control the actual self-propelled device (e.g., the object of interest) to move in a certain direction with a certain speed. The user can also tap a target location on the touch-sensitive display to cause the self-propelled device to move to the target location. In another embodiment, the detected circular object can be used as input for projecting real-life or displayed images based on real-life objects near the user and the circular object (sub-step 226). The detected circular object can also be used to calibrate one or more components of the detected circular object (sub-step 228).
Other applications of processing the detected objects as input can include tracking a ball in a ball game, or using balls and/or discs as markers or fixed points of reference (e.g., fiducial markers). A disc can be used, for example, as a reference point or measuring point for placing other objects in an image. In addition, other applications include using the detected objects for spatial mapping or navigation. Points, object positions, path navigation points, and the like, can be assigned virtual points that correspond to particular locations over an extended duration of time. The computing device can provide the circular object with information about the detected location and position of the circular object, so that the circular object (such as a self-propelled device) can gauge its own progress toward a target location and guide itself to a desired position. In other applications, based on the detected circular object, the absolute position of an object of known height can be calculated from the image by selecting the position of the object (e.g., by tapping on the display screen).
Figs. 3A-3B illustrate examples of processed images, according to an embodiment. The example user interface features provided on a display device, such as described by the embodiments of Figs. 3A-3B, can be implemented using, for example, components such as described with the embodiment of Fig. 1. Accordingly, references made to elements of Fig. 1 are for purposes of illustrating suitable elements or components for performing a step or sub-step being described.
Fig. 3A illustrates an image of a scene that includes a representation of a round or spherical object (e.g., a ball). The circular object can be in motion. However, because Fig. 3A illustrates a single image from a sequence of images, the circular object is depicted, for example, as being stationary. The computing device can provide a preview of such an image on a display device as the image is being detected and/or captured by the image capturing device of the computing device.
After the computing device detects the circular object, and the location or position of the circular object, in the plurality of images, the detected circular object can be used as input for performing one or more additional operations. In one embodiment, the images can be adjusted by visually altering the detected circular object. For example, the user can operate the computing device in order to play a game (e.g., run a gaming application). In Fig. 3B, the image including the detected circular object can be adjusted by overlaying or replacing the detected circular object with another graphic image (e.g., a dragon), as part of the game. Because the spherical object moves in real time in the real world, the sequence of received images also depicts the ball as moving.
Each image can be processed to detect the spherical object, and the detected spherical object can be processed as input, so that the dragon displayed on the display device of the computing device also moves accordingly. In some variations, the rendered images can be dynamically adjusted. For example, the graphic image of the dragon can be dynamically changed into a graphic image of a lizard in response to a trigger, such as a user input, or in response to the object of interest moving to a particular position or becoming adjacent to another object of interest.
Additionally or alternatively, a computing device, such as one implementing system 100 of Fig. 1, can detect multiple circular objects in images and track the positions and/or movements of the multiple circular objects, while another computing device controls the object of interest. In some embodiments, the computing device that includes the image capturing device and/or processes the images to detect the one or more circular objects can be separate or distinct from the computing device that controls the movement of the object of interest, such as a round or spherical self-propelled device. Furthermore, in a variation, the content that is rendered on the computing device based on the detected and tracked objects (e.g., graphic overlays) can be dynamically altered based on one or more triggers or inputs provided by yet another computing device.
For example, multiple users can engage with one another in a gaming environment in which a first user uses a first computing device to control the movement of the object of interest, while a second user and a third user track the object of interest and have content rendered on their respective computing devices. A fourth user can use a fourth computing device to control the content being displayed on the devices of the second and third users (e.g., dynamically adjust which graphic images are displayed in place of the detected object). For example, in one embodiment, the second user can track the object of interest and view rendered content that is different from the content rendered on the computing device of the third user (e.g., based on the fourth user controlling which content is displayed to each user). The computing devices can communicate with each other via wireless communication protocols, such as Bluetooth or Wi-Fi.
In another usage example, multiple users with respective computing devices can detect and track ten objects of interest, including one or more self-propelled devices. Control of the one or more objects of interest can be passed around between the users.
Hardware chart
Fig. 4 shows the example hardware figure of the computer system that can implement the embodiments described herein on it.Such as, in the context of Fig. 1, it is possible to by using the computer system that such as Fig. 4 describes to implement system 100.In one embodiment, computing equipment 400 can correspond to mobile computing device, all if carrying out the cellular devices of telephone relation, message transmission and data service.The example of this kind equipment includes smart phone, mobile phone or for the tablet device of honeycomb substrate, digital camera or notebook computer and desk computer (such as, PC).Computing equipment 400 includes processing resource (such as, one or more processors) 410, memory resource 420, display device 430, one or more communication subsystem 440 (including radio communication subsystem), input mechanism 450 and camera assembly 460.Display device 430 can be touch-sensitive display panel, and it can also receive the input from user.In one embodiment, at least one communication subsystem 440 is sent by data channel and voice channel and receives cellular data.
The processing resources 410 are configured with software and/or other logic to perform one or more processes, steps, and other functions described with the embodiments, such as those of FIGS. 1-3, and elsewhere in this application. The processing resources 410 are configured, with instructions and data stored in the memory resources 420, to implement system 100 (as described with FIG. 1). For example, instructions for implementing object detection (including image filters, gradient detection, and logo recognition), image adjustment, user interface components, and device control can be stored in the memory resources 420 of the computing device 400.
The processing resources 410 can execute instructions for operating object detection and for receiving images captured via the lens and/or other components 460 of the image capturing device (via visual input 462). Upon detecting one or more circular objects in one or more of the images, the processing resources 410 can execute instructions for causing the adjusted image 412 to be presented on the display device 430. The processing resources 410 can also execute instructions for providing device control 414 to a remote device via the communication sub-systems 440.
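A minimal sketch of the loop implied by this description follows, tying together the visual input 462, the adjusted image 412 on display device 430, and device control 414. The `CommLink` class and the command dictionary are hypothetical stand-ins; the `detect` and `render` callables can be the functions from the earlier sketch.

```python
# A sketch under stated assumptions: capture, detect, present, and control.
# CommLink and the {"cmd": ...} vocabulary are illustrative placeholders.
import cv2

class CommLink:
    """Stand-in for communication subsystem 440 (e.g., a Bluetooth socket)."""
    def send(self, command: dict) -> None:
        print("->", command)  # a real link would serialize and transmit this

def run_pipeline(detect, render, camera_index=0):
    """Capture frames (visual input 462), detect circular objects, present the
    adjusted image 412 on the display (430), and emit device control 414."""
    cap = cv2.VideoCapture(camera_index)
    link = CommLink()
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        detections = detect(frame)
        cv2.imshow("display", render(frame, detections))
        if len(detections):                      # steer toward first detection
            x, y, _ = detections[0]
            link.send({"cmd": "heading", "x": int(x), "y": int(y)})
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```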
In some embodiments, the processing resources 410 can execute and operate a variety of different applications and/or functionalities, such as, for example, a home page or start screen, an application launcher page, messaging applications (e.g., an SMS messaging application, an e-mail application, an IM application), a phone application, game applications, calendar applications, document applications, web browser applications, clock applications, camera applications, media viewing applications (e.g., for videos, images, audio), social media applications, financial applications, and device settings.
Interactive augmented reality
FIG. 5 shows an example method of operating a self-propelled device as an augmented-reality entity by using a mobile computing device linked to a second computing device. In one or more embodiments, the method of operating the self-propelled device can include controlling movement of the self-propelled device in a real-world environment (502). Controlling the self-propelled device can be performed by the mobile computing device. The mobile computing device can include program functionality that enables a user to control the movement of the self-propelled device through controls presented or displayed on a display of the mobile computing device; one way such controls could map to motion commands is sketched below.
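The following sketch shows one possible mapping from an on-screen control deflection to a motion command for step 502. The "roll" command vocabulary with speed and heading fields is an assumption, not language from the disclosure.

```python
# A hedged sketch: translate a joystick deflection (dx, dy in [-1, 1]) into a
# speed/heading command. Command names are illustrative assumptions.
import math

def joystick_to_command(dx: float, dy: float, max_speed: float = 1.0) -> dict:
    """Map the on-screen control's deflection to a roll command."""
    speed = min(math.hypot(dx, dy), 1.0) * max_speed
    heading = (math.degrees(math.atan2(dx, -dy)) + 360.0) % 360.0  # 0 deg = up
    return {"cmd": "roll", "speed": round(speed, 2), "heading": round(heading)}
```

For example, a full upward deflection (`dx=0, dy=-1`) yields `{"cmd": "roll", "speed": 1.0, "heading": 0}`.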
In many examples, the method also includes interacting with a second computing device that provides a virtual environment, so that the second computing device presents a virtual entity in the virtual environment (504). The virtual entity can correspond to the self-propelled device in the real world. In many examples, the interaction can be performed by the mobile computing device. For example, a user of the mobile computing device can load an application that causes the mobile computing device to interact with the second computing device. Alternatively or in addition, the second computing device can perform functions for exchanging data with the mobile computing device. For example, a user of the second computing device can provide input that causes the second computing device to interact with the mobile computing device. The second computing device can include a controller, whereby the user can operate the controller to interact with the virtual environment. By using the controller, the user can then prompt the second computing device to interact with the mobile computing device.
In many examples, once the mobile computing device and the second computing device have interacted, some or all of the virtual environment can be displayed on the mobile computing device via the second computing device (506). The virtual entity corresponding to the self-propelled device can also be presented in the virtual environment (508). For example, a character in the virtual environment can represent the self-propelled device in the real world. In these and related embodiments, the user of the mobile computing device can control the self-propelled device in the real world and the virtual entity in the virtual environment at the same time. In related embodiments, the virtual environment can be provided as an augmented reality, and the virtual entity corresponding to the self-propelled device can interact within the augmented reality.
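A sketch of step 508 under stated assumptions: the self-propelled device reports its pose, and the virtual entity mirrors it at a fixed world-to-virtual scale. The odometry field names and the scale factor below are illustrative, not specified by the disclosure.

```python
# Mirror the real device's reported pose onto its virtual entity.
# The odometry dict layout ("x_cm", "y_cm", "heading_deg") is an assumption.
from dataclasses import dataclass

@dataclass
class VirtualEntity:
    x: float = 0.0
    y: float = 0.0
    heading: float = 0.0

def mirror_pose(entity: VirtualEntity, odometry: dict,
                scale: float = 10.0) -> VirtualEntity:
    """Apply one odometry update (real-world cm) to the virtual entity."""
    entity.x = odometry["x_cm"] * scale
    entity.y = odometry["y_cm"] * scale
    entity.heading = odometry["heading_deg"]
    return entity
```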
In many examples, the method also includes detecting an event relevant to the self-propelled device in the real-world environment, and then generating a virtual event based on the detected event (510). For example, the detected event can relate to a particular movement or interaction of the self-propelled device in the real-world environment. The event can be detected, via an interface, by the mobile computing device or the second computing device. The mobile computing device can then incorporate the virtual event into the virtual environment. In doing so, the mobile computing device can translate the particular movement or interaction into a virtual event, such as a control or command within the virtual environment. Additionally or alternatively, the real-world event can involve the self-propelled device striking a surface. Such an impact can correspond to a virtual event in which the virtual entity performs a function. The function can be configured by the user, or can be pre-programmed into a game or application.
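Step 510 could be realized, for example, by classifying an accelerometer reading and translating the result into a virtual command, as in the sketch below. The 2.5 g impact threshold and the event and action names are illustrative assumptions.

```python
# A hedged sketch of event detection and translation: a surface impact,
# inferred from accelerometer magnitude, becomes a virtual-environment event.
import math

IMPACT_THRESHOLD_G = 2.5  # illustrative threshold, in g

def classify_event(accel_xyz):
    """Return a real-world event dict, or None if nothing notable happened."""
    ax, ay, az = accel_xyz
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude > IMPACT_THRESHOLD_G:
        return {"type": "impact", "magnitude": round(magnitude, 2)}
    return None

def to_virtual_event(real_event):
    """Translate a detected real-world event into a virtual event, e.g. an
    impact becomes an action performed by the virtual entity."""
    if real_event and real_event["type"] == "impact":
        return {"type": "perform_action", "action": "stomp",
                "strength": real_event["magnitude"]}
    return None
```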
Fig. 6 A-6B display uses the mobile computing device being linked to the second computing equipment 602 to control the self-advancing device 608 example as the pseudo-entity 610 in virtual environment 612.On such as Fig. 6 A, the example system of display can include the mobile computing device 604 communicated with the second computing equipment 602.Mobile computing device 604 can also operate on mobile computing device 608.Communication link 606 between mobile computing device 604 and the second computing equipment 602 can allow to show virtual environment 612 on mobile computing device 604 by the second computing equipment 602.It addition, then mobile computing device 604 can present self-advancing device 608, as the pseudo-entity 610 in virtual environment 612.
With reference to Fig. 6 A, the second computing equipment 602 can pass through 606 virtual environments 612 of communication link and be supplied to mobile computing device 604.Communication link 606 can be created by the mobile computing device 604 mutual with the second computing equipment 602.Second computing equipment 602 can be operable and provide any computing equipment of this kind of environment 612.Such as, the second computing equipment can be desk computer or notebook computer.Alternatively, second computer can be game console, such as those control stations in the XBOX control station series of MICROSOFT exploitation, or those control stations in the PLAYSTATION control station series of SONYCOMPUTERENTERTAINMENT exploitation.Alternatively, the second computing equipment can be able to any computing equipment of making virtual environment 612 communicate with mobile computing device 604.
The virtual environment 612 provided by the second computing device can be displayed on the mobile computing device 604. The virtual environment 612 can be an environment developed through user interaction with the second computing device. For example, the virtual environment 612 can be a three-dimensional map provided through an online computer program. Alternatively, the virtual environment 612 can be a map with any number or variety of customizations, real-world adjustments, or augmented-reality features. Additionally or alternatively, the virtual environment 612 can be provided by a gaming console, for instance as part of a mobile game, a mini-game, an instant game, and so on.
In addition, the mobile computing device 604 can be configured to access saved data associated with the virtual environment 612 that is stored on the second computing device 602. The saved data can correspond to, for example, previous interactions performed by the user within the virtual environment 612. Similarly, the second computing device 602 can be configured to transmit the saved data to the mobile computing device 604, consistently linking previous interactions within the virtual environment 612 to the mobile computing device 604. Thus, for example, a user playing a game with the virtual environment 612 on the second computing device 602 can save data associated with that game, later access the saved data by using the mobile computing device 604, and use the self-propelled device 608 as the virtual entity 610 presented in the virtual environment 612 (as shown in FIG. 6B). A sketch of such a save-data handoff follows.
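The disclosure says only that saved data is stored on the second computing device 602 and transmitted to the mobile computing device 604; the HTTP endpoint, port, and JSON field names in the following sketch are hypothetical.

```python
# A hedged sketch of the save-data handoff; transport and data layout are
# assumptions, chosen for illustration only.
import json
import urllib.request

def fetch_save_data(console_host: str, user_id: str) -> dict:
    """Ask the second computing device for the user's prior interactions."""
    url = f"http://{console_host}:8080/saves/{user_id}"  # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=5.0) as resp:
        return json.load(resp)

def resume_session(save: dict, entity) -> list:
    """Link previous interactions to the mobile session: restore the virtual
    entity's last position and return any unlocked content."""
    entity.x = save.get("last_x", 0.0)
    entity.y = save.get("last_y", 0.0)
    return save.get("unlocked_sprites", [])
```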
With reference to Fig. 6 B, self-advancing device 608 can be controlled by mobile computing device 604.As shown, the pseudo-entity 610 representing self-advancing device 608 can be presented as the entity in virtual environment 612 on mobile computing device 604.It addition, mobile computing device 604 can be configured to the event that detection is relevant with the self-advancing device 608 in real world, determine virtual events according to the event detected subsequently.Then virtual events can be merged in virtual environment 612.As an example, as shown in Figure 6B, self-advancing device 608 can be represented as pterosaur symbol in augmented reality environment.The user of mobile computing device 604 can pass through to use control 614 manipulation self-advancing devices 608, thus the mobile dinosaur presented on mobile computing device 604.And, mobile computing device 604 can also allow for there is multiple dependency between the virtual events that the input for self-advancing device 608 is corresponding with the pseudo-entity 610 for self-advancing device 608.
Therefore, in practice, a user can operate the self-propelled device 608 anywhere in a real-world environment. A parallel augmented-reality world can be displayed on the mobile computing device 604, through the virtual entity 610 corresponding to the self-propelled device 608, within the virtual environment 612. By using the controls 614 for the self-propelled device 608, the user can operate the self-propelled device 608 and cause it to interact with the augmented-reality world 612 via the virtual entity 610.
It is contemplated that the embodiments described herein extend to the individual elements and concepts described, independently of other concepts, ideas, or systems, and that the embodiments include combinations of elements recited anywhere in this application. Although the embodiments are described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments. As such, many modifications and variations will be apparent to practitioners skilled in the art. Accordingly, it is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment can be combined with other individually described features, or parts of other embodiments, even if the other features and embodiments make no mention of the particular feature. Thus, the absence of describing a combination should not preclude the inventors from claiming rights to such a combination.
One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically means through the use of code or computer-executable instructions. A programmatically performed step may or may not be automatic.
One or more embodiments described herein can be implemented using programmatic modules or components. A programmatic module or component can include a program, a sub-routine, a portion of a program, or a software component or hardware component capable of performing one or more stated tasks or functions. As used herein, a module or component can exist on a hardware component independently of other modules or components. Alternatively, a module or component can be a shared element or process of other modules, programs, or machines.
Furthermore, one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium. Machines shown or described with the figures below provide examples of processing resources and computer-readable media on which instructions for implementing embodiments of the invention can be carried and/or executed. In particular, the numerous machines shown with embodiments of the invention include processors and various forms of memory for holding data and instructions. Examples of computer-readable media include permanent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage media include portable storage units (such as CD or DVD units), flash memory (such as carried on many cell phones and tablets), and magnetic memory. Computers, terminals, and network-enabled devices (e.g., mobile devices such as cell phones) are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable media. Additionally, embodiments may be implemented in the form of computer programs, or a computer-usable carrier medium capable of carrying such a program.
While embodiments are described herein with reference to the drawings, variations to specific embodiments and details are encompassed by this disclosure. It is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment can be combined with other individually described features, or parts of other embodiments. Thus, the absence of describing a combination should not preclude the inventors from claiming rights to such a combination.
Although certain embodiments of the invention have been described above, it will be understood that the described embodiments are by way of example only. Accordingly, the invention should not be limited based on the described embodiments. Rather, the scope of the invention described herein should be limited only in light of the claims that follow when taken in conjunction with the above description and accompanying drawings.

Claims (24)

1. A method for operating a mobile computing device to control a self-propelled device, characterised in that the method comprises:
controlling movement of the self-propelled device in a real-world environment;
interacting with a second computing device that provides a virtual environment, so that the second computing device presents a virtual entity in the virtual environment, the virtual entity corresponding to the self-propelled device moving in the real-world environment; and
based on the interaction with the second computing device, displaying the virtual environment on the mobile computing device by way of the second computing device.
2. The method of claim 1, characterised in that the method further comprises:
detecting an event relevant to the self-propelled device in the real-world environment;
determining a virtual event based on the detected event; and
incorporating the virtual event into the virtual environment.
3. The method of claim 1, characterised in that the method further comprises:
accessing saved data associated with the virtual environment and stored on the second computing device, the saved data corresponding to previous interactions with the virtual environment; and
transmitting the saved data to the mobile computing device, consistently linking the previous interactions within the virtual environment to the mobile computing device.
4. The method of claim 3, characterised in that the previous interactions and the current control of the self-propelled device correspond to gameplay within the virtual environment.
5. The method of claim 4, characterised in that the gameplay is associated with one or more of a mini-game, an instant game, or a mobile game.
6. The method of claim 1, characterised in that the virtual environment displayed on the mobile computing device is linked to a display of the virtual environment implemented by the second computing device.
7. The method of claim 1, characterised in that the virtual environment corresponds to an augmented reality.
8. The method of claim 1, characterised in that the mobile computing device is one or more of a smartphone, a tablet computer, or a laptop computer.
9. The method of claim 1, characterised in that interacting with the second computing device is performed by the mobile computing device.
10. The method of claim 1, characterised in that the interacting step is performed by the second computing device.
11. A system, characterised in that the system comprises:
a self-propelled device; and
a mobile computing device operatively linked to the self-propelled device, the mobile computing device being configured to interact with a second computing device, the second computing device providing a virtual environment that is displayed on the mobile computing device;
wherein the self-propelled device is associated with an entity presented in the virtual environment.
12. The system of claim 11, characterised in that an event associated with the self-propelled device is augmented and incorporated into the virtual environment as a virtual event.
13. The system of claim 11, characterised in that the mobile computing device is configured to access saved data associated with the virtual environment and stored on the second computing device, the saved data corresponding to previous interactions with the virtual environment.
14. The system of claim 13, characterised in that the previous interactions within the virtual environment are consistently linked to current interactions of a user operating the self-propelled device via the mobile computing device.
15. The system of claim 14, characterised in that the previous and current interactions correspond to gameplay within the virtual environment.
16. The system of claim 15, characterised in that the gameplay is associated with one or more of a mini-game, an instant game, or a mobile game.
17. The system of claim 11, characterised in that the virtual environment corresponds to an augmented reality.
18. The system of claim 11, characterised in that the mobile computing device is one or more of a smartphone, a tablet computer, or a laptop computer.
19. The system of claim 11, characterised in that the mobile computing device performs one or more operations enabling the mobile computing device to interact with the second computing device.
20. The system of claim 11, characterised in that the second computing device performs one or more operations to interact with the mobile computing device.
21. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to:
enable a user to control, using a mobile computing device, movement of a self-propelled device in a real-world environment;
interact with a second computing device that provides a virtual environment, so that the second computing device presents, in the virtual environment, a virtual entity corresponding to the self-propelled device in the real-world environment; and
display the virtual environment on the mobile computing device by way of the second computing device.
22. The non-transitory computer-readable medium of claim 21, characterised in that the instructions further cause the one or more processors to:
detect an event relevant to the self-propelled device in the real-world environment;
determine a virtual event based on the detected event; and
incorporate the virtual event into the virtual environment.
23. The non-transitory computer-readable medium of claim 21, characterised in that the instructions further cause the one or more processors to:
access saved data associated with the virtual environment and stored on the second computing device, the saved data corresponding to previous interactions with the virtual environment; and
transmit the saved data to the mobile computing device, consistently linking the previous interactions within the virtual environment to the mobile computing device.
24. The non-transitory computer-readable medium of claim 21, characterised in that the instructions further cause the one or more processors to:
present the virtual environment as an augmented reality.
CN201480062743.XA 2013-10-15 2014-10-09 Interactive augmented reality using a self-propelled device Pending CN105745917A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US14/054,636 US9827487B2 (en) 2012-05-14 2013-10-15 Interactive augmented reality using a self-propelled device
US14/054,636 2013-10-15
PCT/US2014/059973 WO2015057494A1 (en) 2013-10-15 2014-10-09 Interactive augmented reality using a self-propelled device

Publications (1)

Publication Number Publication Date
CN105745917A true CN105745917A (en) 2016-07-06

Family

ID=52828562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480062743.XA Pending CN105745917A (en) 2013-10-15 2014-10-09 Interactive augmented reality using a self-propelled device

Country Status (5)

Country Link
EP (1) EP3058717A4 (en)
CN (1) CN105745917A (en)
AU (1) AU2014334669A1 (en)
HK (1) HK1221365A1 (en)
WO (1) WO2015057494A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3119445A1 (en) 2021-02-03 2022-08-05 Adam Pyrométrie "RAKU" electric ceramic kiln on domestic power supply

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040189675A1 (en) * 2002-12-30 2004-09-30 John Pretlove Augmented reality system and method
US20090284553A1 (en) * 2006-11-09 2009-11-19 Parrot Method of defining a game zone for a video game system
US20120307001A1 (en) * 2011-06-03 2012-12-06 Nintendo Co., Ltd. Information processing system, information processing device, storage medium storing information processing program, and moving image reproduction control method
CN103180893A (en) * 2011-08-23 2013-06-26 索尼公司 Method and system for use in providing three dimensional user interface

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7704119B2 (en) * 2004-02-19 2010-04-27 Evans Janet E Remote control game system with selective component disablement
KR100969873B1 (en) * 2008-06-27 2010-07-13 가톨릭대학교 산학협력단 Robot game system and robot game method relating virtual space to real space
US9114838B2 (en) * 2011-01-05 2015-08-25 Sphero, Inc. Self-propelled device for interpreting input from a controller device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040189675A1 (en) * 2002-12-30 2004-09-30 John Pretlove Augmented reality system and method
US20090284553A1 (en) * 2006-11-09 2009-11-19 Parrot Method of defining a game zone for a video game system
US20120307001A1 (en) * 2011-06-03 2012-12-06 Nintendo Co., Ltd. Information processing system, information processing device, storage medium storing information processing program, and moving image reproduction control method
CN103180893A (en) * 2011-08-23 2013-06-26 索尼公司 Method and system for use in providing three dimensional user interface

Also Published As

Publication number Publication date
HK1221365A1 (en) 2017-05-26
EP3058717A4 (en) 2017-07-26
AU2014334669A1 (en) 2016-05-05
EP3058717A1 (en) 2016-08-24
WO2015057494A1 (en) 2015-04-23

Similar Documents

Publication Publication Date Title
US10192310B2 (en) Operating a computing device by detecting rounded objects in an image
US20180296911A1 (en) Interactive augmented reality using a self-propelled device
US11270419B2 (en) Augmented reality scenario generation method, apparatus, system, and device
WO2019153750A1 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
US20190057531A1 (en) Repositioning user perspectives in virtual reality environments
US10761595B2 (en) Content browsing
US11887258B2 (en) Dynamic integration of a virtual environment with a physical environment
KR101692335B1 (en) System for augmented reality image display and method for augmented reality image display
CN108619721A (en) Range information display methods, device and computer equipment in virtual scene
CN110917616B (en) Orientation prompting method, device, equipment and storage medium in virtual scene
EP2847616B1 (en) Surveying apparatus having a range camera
US20200293176A1 (en) 360° video viewer control using smart device
CN113424230A (en) Method, system, and non-transitory computer-readable storage medium for generating animation sequence
KR101983233B1 (en) Augmented reality image display system and method using depth map
CN105745917A (en) Interactive augmented reality using a self-propelled device
JP6554139B2 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
KR20150094338A (en) System and method for providing augmented reality service using of terminal location and pose
KR20180062187A (en) Method and apparatus for controlling displaying of augmented reality contents based on gyro sensor
US20230060150A1 (en) Physical action-based augmented reality communication exchanges
KR102520841B1 (en) Apparatus for simulating of billiard method thereof
CN108310769A (en) Virtual objects adjusting method, device and computer readable storage medium
US11980807B2 (en) Adaptive rendering of game to capabilities of device
US20230191259A1 (en) System and Method for Using Room-Scale Virtual Sets to Design Video Games
US20230062366A1 (en) Handcrafted augmented reality experiences
US20230066686A1 (en) Augmented reality prop interactions

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1221365

Country of ref document: HK

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160706

WD01 Invention patent application deemed withdrawn after publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1221365

Country of ref document: HK