CN104346612A - Information processing apparatus, and displaying method - Google Patents

Information processing apparatus, and displaying method

Info

Publication number
CN104346612A
CN104346612A CN201410341965.7A CN201410341965A
Authority
CN
China
Prior art keywords
information
content
display
information processing
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410341965.7A
Other languages
Chinese (zh)
Other versions
CN104346612B (en)
Inventor
渡边裕树
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Publication of CN104346612A publication Critical patent/CN104346612A/en
Application granted granted Critical
Publication of CN104346612B publication Critical patent/CN104346612B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An information processing apparatus includes: a calculation unit configured to calculate, based on a figure of a reference object recognized from an input image, positional information indicating a positional relationship between the reference object and an imaging position of the input image; and a determination unit configured to select, based on the positional information, at least one piece of display data from among a plurality of pieces of display data associated with the reference object.

Description

Information processing apparatus and displaying method
Technical field
The technology disclosed in the embodiments relates to a technique for providing information to a user.
Background Art
There is a technology in which the model data of a three-dimensional object placed in a virtual three-dimensional space corresponding to a real space is superimposed on a captured image taken by an imaging device. This technology is called augmented reality (AR) technology or the like because it augments the information collected through human perception (for example, visual perception). The model data of the three-dimensional object placed in the virtual three-dimensional space corresponding to the real space is called AR content.
AR content has a position in the virtual space corresponding to the real space, and the projected image of the AR content is superimposed on the captured image based on that position. The projected image of the AR content is generated based on the positional relationship between the imaging device and the AR content.
A reference object existing in the real space is used to determine the positional relationship between the imaging device and the AR content. For example, an AR marker is commonly used as the reference object. That is, if an AR marker is detected in the captured image taken by the imaging device, the positional relationship between the AR marker and the imaging device is determined based on the figure of the AR marker in the captured image. Then, since the positional relationship is reflected, the projected image of the AR content corresponding to the AR marker is superimposed on the captured image (see Japanese National Publication of International Patent Application No. 2010-531089 and International Publication Pamphlet No. WO 2005-119539).
There is a known AR server that selects the content to be provided to a terminal device depending on the positional information of the terminal device that sends a content request to the AR server (see Japanese Laid-open Patent Publication No. 2012-215989). The positional information used by the terminal device includes positional information received by GPS, positional information of the terminal device identified by a radio base station, and information input by the user, such as an address, a telephone number, and a postal code.
Summary of the invention
Technical Problem
The above-described AR server can send different content to the terminal device according to whether AR marker X is recognized in area A or area B.
However, the AR server cannot provide different content according to the positional relationship between the user and AR marker X. For example, even if the content that is useful to the user changes depending on whether the user is to the left or to the right of marker X, the AR server cannot select the content to be provided to the user accordingly.
An object of the technology disclosed in the embodiments is to provide information according to the positional relationship between the imaging position and the marker.
Solution to Problem
According to an aspect of the present invention, an information processing apparatus includes: a calculation unit configured to calculate, based on a figure of a reference object recognized from an input image, positional information indicating a positional relationship between the reference object and an imaging position of the input image; and a determination unit configured to select, based on the positional information, at least one piece of display data from among a plurality of pieces of display data associated with the reference object.
Beneficial Effects
According to an aspect of the present invention, information can be provided according to the environment in which the user is present.
Brief Description of Drawings
Fig. 1 depicts an example of a use scene according to the present embodiment;
Fig. 2 depicts a captured image taken by user 111;
Fig. 3 depicts a display image provided to user 111;
Fig. 4 depicts a captured image taken by user 112;
Fig. 5 depicts a conventional display image in which the positional relationship between marker M and user 112 is not considered;
Fig. 6 depicts a display image provided to user 112 in the present embodiment;
Fig. 7 depicts a captured image taken by user 113;
Fig. 8 depicts a display image provided to user 113 in the present embodiment;
Fig. 9 depicts the structure of a system;
Figure 10 is a functional block diagram of the information processing apparatus 1 according to the first embodiment;
Figure 11 depicts the relationship between the camera coordinate system and the marker coordinate system;
Figure 12 depicts the transformation matrix M from the marker coordinate system to the camera coordinate system, and the rotation matrix R in the transformation matrix M;
Figure 13 depicts the rotation matrices R1, R2, and R3;
Figure 14 is a diagram for describing positional information;
Figure 15 depicts an example of AR content C in the camera coordinate system and the marker coordinate system;
Figures 16A and 16B depict examples of the structure of correspondence relationship information tables;
Figure 17 depicts an example of the data structure of an AR content information table;
Figure 18 depicts an example of the data structure of a template information table;
Figure 19 is a functional block diagram of the management device according to the first embodiment;
Figure 20 is a flowchart of the information providing method according to the first embodiment;
Figure 21 is a functional block diagram of the information processing apparatus according to the second embodiment;
Figure 22 is a functional block diagram of the management device 4 according to the second embodiment;
Figure 23 is a flowchart of the information providing method according to the second embodiment;
Figure 24 depicts an example of the hardware configuration of the information processing apparatus according to the embodiments;
Figure 25 depicts an example of the structure of programs running on a computer 300; and
Figure 26 depicts an example of the hardware configuration of the management device according to the embodiments.
Reference numerals list
1, 3 information processing apparatuses
10, 30 communication units
11, 31 control units
12, 32 imaging units
13, 33 storage units
14, 34 display units
2, 4 management devices
20, 40 communication units
21, 41 control units
22, 42 storage units
Description of Embodiments
Embodiments according to the present disclosure will be described in detail below. The following embodiments may be combined with one another as appropriate, as long as there is no contradiction between their processes. The embodiments are described below with reference to the accompanying drawings.
A use scene to which the present embodiment can be applied is described first. Fig. 1 depicts an example of a use scene according to the present embodiment. Fig. 1 shows a view of a real space. For example, there is a cylindrical pipe 101, and a marker M is attached to the pipe 101. The marker M is an example of a reference object used for determining the positional relationship between the imaging device and the AR content. The reference object only has to be identifiable as a reference based on its shape. A valve 102 and a sensor 103 are installed on the pipe 101.
In Fig. 1, the users present at different positions are assumed to be user 111, user 112, and user 113. Users 111, 112, and 113 may be different people or the same person present at different times. Each of users 111, 112, and 113 holds an information processing apparatus that includes an imaging device and captures the marker M at the corresponding position.
As described below, the information processing apparatus recognizes the marker M in the image captured by the imaging device. The information processing apparatus then generates a display image in which the AR content corresponding to the marker M is superimposed on the image, and shows this display image on a display. Users 111, 112, and 113 can grasp the details of the AR content by referring to the display images corresponding to their respective captured images.
The information provided to the user by AR content supplements, augments, or adds to the information present in the real space. For example, the information provided by AR content includes operational details, descriptions, reminders, and the like to be applied to the objects present in the real space.
For example, user 111 captures the marker M with the imaging device at a position substantially in front of the marker M. The imaging device is included in the information processing apparatus held by user 111. As described later, the information processing apparatus is, for example, a smartphone or a tablet PC.
Fig. 2 depicts the captured image taken by user 111. Fig. 3 depicts the display image provided to user 111. In the display image depicted in Fig. 3, AR content is superimposed on the captured image of Fig. 2.
For example, as depicted in Fig. 2, the captured image includes the pipe 101, the valve 102, and the sensor 103. On the other hand, as depicted in Fig. 3, the display image created by the process described later also includes AR content. When the marker M is associated in advance with AR content C1 and AR content C2, the display image as depicted in Fig. 3 is generated. That is, as depicted in Fig. 3, in addition to the pipe 101, the marker M, the valve 102, and the sensor 103, the display image includes AR content C1 and AR content C2.
For AR content C1 and AR content C2, text data (such as the balloon "Confirm that the valve is closed") is defined as model data in a three-dimensional space. Furthermore, for AR content C1 and AR content C2, the placement relative to the marker M, such as the position and rotation, is specified in advance.
For example, the creator of the AR content creates in advance the model data of a balloon displaying "Confirm that the valve is closed", so that AR content C1 is displayed at the position of the valve 102 existing in the real space.
Therefore, when the information processing apparatus recognizes the marker M, AR content C1 is displayed on the captured image according to the positional information between user 111 and the marker M and the placement information specified in advance. Details will be described later. When user 111 captures the marker M after AR content C1 has been created, AR content C1 "Confirm that the valve is closed" is displayed. Thus, user 111 can check the valve 102 existing in the real space by referring to AR content C1. The same applies to AR content C2.
Next, the situation of user 112 depicted in Fig. 1 will be described. Since user 112 is to the left of the marker M, the imaging device of the information processing apparatus held by user 112 captures the marker M from the left. Fig. 4 depicts the captured image taken by user 112. As depicted in Fig. 4, the captured image includes the pipe 101, the marker M, and the valve 102. Note that, due to the position of user 112, the sensor 103 is not included in the captured image.
Fig. 5 depicts a conventional display image in which the positional relationship between the marker M and user 112 is not considered. As in Fig. 3, AR content C1 and AR content C2, which are uniquely associated with the marker M, are displayed on the captured image. Since the positional relationship between the marker M and user 112 differs from the positional relationship between the marker M and user 111, the display positions of AR content C1 and AR content C2 in the display image are different.
Since the sensor 103 is invisible to user 112 in the real space, even when user 112 sees AR content C2 "Check the value of the sensor", user 112 cannot grasp which sensor should be checked. Moreover, because the sensor 103 is invisible, user 112 may forget to carry out the work related to AR content C2. That is, the AR technology, whose purpose is to augment the information collected through the user's visual perception by providing the user with the details of the AR content, is not fully utilized.
Therefore, in the present embodiment, user 112 is provided with AR content appropriate to the positional relationship between the marker M and user 112, with that positional relationship taken into consideration. Fig. 6 depicts the display image provided to user 112 in the present embodiment. Fig. 6 includes the pipe 101, the marker M, and the valve 102 present in the real space, as well as AR content C3 and AR content C4. AR content C3 and AR content C4 provide appropriate information based on the positional relationship between user 112 and the marker M, taking the environment (field of view) of user 112 into consideration.
For example, AR content C3 "Confirm that the front valve is closed" indicates work for the valve 102, which is in front of user 112. AR content C4 "Sensor on the back side" indicates that the sensor 103 is in a blind spot of user 112. AR content C4 may prompt the user to move to a place where the sensor 103 is visible, or may prompt the user to check the value of the sensor 103 on the back side.
As described above, the information of AR content C1 and AR content C2 is appropriate for the environment (field of view) of user 111, while the information of AR content C3 and AR content C4 is appropriate for the environment (field of view) of user 112. Therefore, the present embodiment appropriately selects the AR content to be displayed according to the positional relationship between each user and the marker M. Note that the conditions related to the positional relationship and the AR content provided under each condition are set in advance by the creator.
Next, the situation of user 113 depicted in Fig. 1 will be described. Since user 113 is to the right of the marker M, the imaging device of the information processing apparatus held by user 113 captures the marker M from the right. Fig. 7 depicts the captured image taken by user 113. As depicted in Fig. 7, the captured image includes the pipe 101, the marker M, and the sensor 103. Note that, due to the position of user 113, the valve 102 is not included in the captured image.
Fig. 8 depicts the display image provided to user 113 in the present embodiment. Fig. 8 includes the pipe 101, the marker M, and the sensor 103 present in the real space, as well as AR content C5 and AR content C6. AR content C5 and AR content C6 provide information appropriate to the environment (field of view) of user 113, based on the positional relationship between user 113 and the marker M.
For example, AR content C5 "Valve on the back side" indicates that the valve 102 is in a blind spot of user 113. AR content C6 "Check the value of the front sensor" indicates work for the sensor 103, which is in front of user 113. AR content C5 may prompt the user to move to a place where the valve 102 is visible, or may prompt the user to check the open/closed state of the valve 102 on the back side.
As described above, which of the display images depicted in Fig. 3, Fig. 6, and Fig. 8 is provided depends on the positional relationship between each user and the marker. Note that the user position is the imaging position at which the captured image is taken. AR content is an example of the display information described later. The processing of the embodiments will be described in detail below.
[First embodiment]
First, the detailed processing, structure, and the like of the information processing apparatus according to the first embodiment will be described. Fig. 9 depicts the structure of the system. In the example of Fig. 9, communication terminal 1-1 and communication terminal 1-2 are shown as information processing apparatuses. These information processing apparatuses are collectively referred to as the information processing apparatus 1. The information processing apparatus 1 communicates with a management device 2 via a network N. The information processing apparatus 1 is a computer, such as a tablet PC or a smartphone. The management device 2 is, for example, a server computer, and manages the information processing apparatus 1. The network N is, for example, the Internet. The system according to the present embodiment includes the information processing apparatus 1 and the management device 2.
The information processing apparatus 1 displays a display image according to the positional relationship between the user and the marker, under the control of the management device 2. For example, the information processing apparatus 1 provides AR content according to the positional relationship between the user and the marker. As a result, the display image includes AR content appropriate for the user's environment.
Next, the functional structure of the information processing apparatus 1 will be described. Figure 10 is a functional block diagram of the information processing apparatus 1 according to the first embodiment. The information processing apparatus 1 includes a communication unit 10, a control unit 11, a storage unit 13, and a display unit 14. The information processing apparatus 1 may also include an imaging unit 12. When the imaging unit 12 is the imaging device, the user position is the position of the information processing apparatus 1.
If the information processing apparatus 1 does not have the imaging unit 12, the communication unit 10 may obtain a captured image from another imaging device. In this case, however, the user position is the imaging position of that other imaging device, not the position of the information processing apparatus 1.
The communication unit 10 communicates with other computers. For example, the communication unit 10 communicates with the management device 2, and receives the AR content information, the template information, and the correspondence relationship information from the management device 2. Although details will be given later, the AR content information defines the AR content. The template information is model data for drawing the AR content. The correspondence relationship information associates a condition related to the positional relationship between the imaging position and the marker with the AR content provided under that condition.
The control unit 11 controls the various types of processing performed by the information processing apparatus 1. For example, the control unit 11 receives the captured image taken by the imaging unit 12 as the input image, and provides information according to the positional relationship between the imaging position and the marker. The control unit 11 includes a recognition unit 15, a calculation unit 16, a determination unit 17, and a generation unit 18.
The recognition unit 15 recognizes the reference object in the input image. That is, in the present embodiment, the recognition unit 15 recognizes the marker. Conventional object recognition methods are applicable to the recognition of the marker. For example, the recognition unit 15 recognizes the marker by template matching using a template that defines the shape of the marker.
Furthermore, when the marker (that is, the reference object) included in the captured image is recognized, the recognition unit 15 obtains the marker ID. The marker ID is identification information that identifies the marker. Conventional methods are applicable to obtaining the marker ID. For example, when the reference object is a marker, a unique marker ID is obtained from a white/black arrangement, as in a two-dimensional barcode.
The calculation unit 16 calculates, based on the figure of the reference object recognized by the recognition unit 15, positional information indicating the positional relationship between the reference object and the imaging position of the input image. For example, the calculation unit 16 calculates the positional information based on the shape of the figure of the reference object (marker) in the input image.
The positional relationship in the real space between the imaging device and the reference object is obtained based on the appearance (figure) of the reference object captured in the image. Since the shape, texture, and the like of the reference object are known, the position of the reference object relative to the camera can be determined by comparing the figure of the reference object with the known shape or known texture.
The method for calculating the positional information will be described in detail below. First, the camera coordinate system and the marker coordinate system will be described. Figure 11 depicts the relationship between the camera coordinate system and the marker coordinate system.
The marker M depicted in Figure 11 is an example of a reference object used for displaying AR content. The marker M depicted in Figure 11 is square, and its size is determined in advance (for example, each side is 10 centimeters long). Although the marker M depicted in Figure 11 is square, another reference object may be used as long as its shape allows its position and orientation relative to the camera to be determined from a figure captured from any one of multiple viewpoints.
The camera coordinate system consists of three dimensions (Xc, Yc, Zc), and, for example, the focal point of the camera is used as its origin (origin Oc). In the present embodiment, the camera is an example of the imaging device included in the information processing apparatus 1. For example, the Xc-Yc plane of the camera coordinate system is parallel to the image pickup device plane of the camera, and the Zc axis (the depth direction) is orthogonal to the image pickup device plane.
The marker coordinate system consists of three dimensions (Xm, Ym, Zm), and, for example, the center of the marker M is used as its origin (origin Om). For example, the Xm-Ym plane of the marker coordinate system is parallel to the printed pattern surface of the marker M, and the Zm axis is orthogonal to the printed surface of the marker M. The origin Om of the marker coordinate system is expressed as V1c (X1c, Y1c, Z1c) in the camera coordinate system.
The rotation angle of the marker coordinate system relative to the camera coordinate system is expressed as rotational coordinates G1c (P1c, Q1c, R1c). P1c is the rotation angle around the Xc axis, Q1c is the rotation angle around the Yc axis, and R1c is the rotation angle around the Zc axis. The marker M depicted in Figure 11 is rotated only around the Ym axis, so P1c and R1c are 0 in this case.
Figure 12 depicts the transformation matrix M from the marker coordinate system to the camera coordinate system, and the rotation matrix R in the transformation matrix M. The transformation matrix M is a 4 x 4 matrix. The transformation matrix M is calculated based on the position coordinates V1c (X1c, Y1c, Z1c) and the rotational coordinates G1c (P1c, Q1c, R1c) of the marker in the camera coordinate system.
The rotation matrix R indicates how the marker M of the known (square) shape must be rotated to match the figure of the marker M in the input image. The rotation matrix R is therefore obtained as the product of the per-axis rotation matrices R1, R2, and R3, which are depicted in Figure 13.
Here, applying to coordinates in the camera coordinate system the submatrix (rotation matrix R) consisting of rows 1 to 3 and columns 1 to 3 of the inverse matrix M⁻¹ of the transformation matrix M performs a rotation that makes the orientation of the camera coordinate system match the orientation of the marker coordinate system. Applying the submatrix consisting of rows 1 to 3 and column 4 of M⁻¹ performs the translation that adjusts the position of the camera coordinate system relative to the marker coordinate system.
After the transformation matrix is generated, the calculation unit 16 obtains the column vector Am (Xm, Ym, Zm, 1) as the product of the inverse matrix M⁻¹ of the transformation matrix M from the marker coordinate system to the camera coordinate system and the column vector Ac (Xc, Yc, Zc, 1). Specifically, the calculation unit 16 obtains the column vector Am (Xm, Ym, Zm, 1) using Expression 1 below.
Am = M⁻¹·Ac (1)
If the imaging position is assumed to substantially match the origin of the camera coordinate system, the imaging position is (0, 0, 0). Therefore, by assigning the column vector (0, 0, 0, 1) to Ac, it can be determined from Expression 1 which point in the marker coordinate system corresponds to the origin of the camera coordinate system.
Assume that the point corresponding to the origin of the camera coordinate system is U (Xu, Yu, Zu) in the marker coordinate system. The point U is determined by the first through third components of the column vector Au (Xu, Yu, Zu, 1) obtained by Expression 1.
Next, the calculation unit 16 calculates the positional information indicating the direction relative to the marker based on the point U. Figure 14 is a diagram for describing the positional information. As depicted in Figure 14, in the present embodiment, the calculation unit 16 obtains the direction θu of the imaging position about the X axis of the marker coordinate system, relative to the Xm-Ym plane of the marker.
In the present embodiment, cos θu is obtained using Expression 2 below, and cos θu is used as the positional information. The positional information θu may also be obtained by converting cos θu. Alternatively, the point U (Xu, Yu, Zu) itself, which does not depend on the direction θu, may be used.
cos θu = Xu / √(Yu² + Zu²) (2)
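As an illustration only (not part of the disclosure), the computation of the point U and of the positional information cos θu can be sketched in Python with NumPy as follows, assuming the 4 x 4 transformation matrix M of Figure 12 has already been estimated from the figure of the marker; the function name and interface are hypothetical:

    import numpy as np

    def positional_info(M):
        # U = M^-1 * (0, 0, 0, 1): the camera origin (imaging position)
        # expressed in the marker coordinate system (Expression 1).
        Au = np.linalg.inv(M) @ np.array([0.0, 0.0, 0.0, 1.0])
        Xu, Yu, Zu = Au[:3]
        # Expression 2: cos(theta_u), used as the positional information.
        cos_theta_u = Xu / np.sqrt(Yu**2 + Zu**2)
        return (Xu, Yu, Zu), cos_theta_u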
The positional information may also be calculated by a method different from the one described above. For example, the calculation unit 16 generates the positional information "the imaging position is to the right of the reference object" or "the imaging position is to the left of the reference object" based on the figure of the reference object. For example, when a reference object that looks square when viewed from the front is used, if the left side of the figure of the reference object in the input image is shorter than its right side by a certain ratio, the calculation unit 16 generates "the imaging position is to the right of the reference object". That is, the calculation unit 16 calculates positional information indicating the imaging position relative to the reference object by comparing the length of the left side with the length of the right side against the known shape of the reference object. The determination unit 17, described later, switches and selects the display information according to the positional information.
Returning to Figure 10, the determination unit 17 determines the display information according to the positional information calculated by the calculation unit 16. For example, the determination unit 17 identifies the condition corresponding to the positional information by referring to the correspondence relationship information, and then identifies the display information (AR content) corresponding to that condition in the correspondence relationship information. The identified display information is provided to the user via, for example, the generation unit 18. The correspondence relationship information will be described later.
The generation unit 18 generates a display image based on the display information and the input image. For example, based on the AR content information and the template information, the generation unit 18 generates a display image in which the AR content identified by the determination unit 17 is superimposed on the input image. The AR content information and the template information will be described later.
The method for generating the display image will now be described. AR content is model data consisting of multiple points. A texture or an image is set for each of multiple planes obtained by interpolating the multiple points with straight lines or curves, and the multiple planes are combined to form the three-dimensional model data.
In the placement of AR content, the coordinates of the points included in the AR content are defined relative to the reference object present in the real space. On the other hand, as described above, the positional relationship in the real space between the camera and the reference object is obtained based on the appearance (figure) of the reference object in the image captured by the camera.
Therefore, the positional relationship between the camera and the coordinates of the points of the AR content can be obtained from the coordinates relative to the reference object and the positional relationship between the camera and the reference object. Then, based on the positional relationship between the camera and the coordinates of the points of the AR content, the figure (projected image) of the AR content that would be obtained if the camera captured the AR content present in the virtual space is generated. The camera coordinate system and the marker coordinate system have been described above.
Figure 15 depicts an example of AR content in the camera coordinate system and the marker coordinate system. The AR content C1 depicted in Figure 15 is model data of a speech balloon, and the text data in the speech balloon reads "Confirm that the valve is closed". Assume that the black dot at the tip of the speech balloon of AR content C1 is the reference point of AR content C1, and that the coordinates of the reference point of AR content C1 in the marker coordinate system are V1m (X1m, Y1m, Z1m). Furthermore, the orientation of AR content C1 is determined by the rotational coordinates G1m (P1m, Q1m, R1m), and the size of AR content C1 is determined by the magnification D1m (J1x, J1y, J1z).
The coordinates of the points included in AR content C1 are obtained by adjusting the coordinates of the points defined in the prototype (AR template) of AR content C1 based on the reference point coordinates V1m, the rotational coordinates G1m, and the magnification D1m.
For example, the coordinates of the reference point defined in the AR template are (0, 0, 0). The coordinates of each point included in the AR template are adjusted by a rotation based on the set rotational coordinates G1m, an enlargement or reduction based on the magnification D1m, and a parallel translation based on the reference point coordinates V1m. The AR content C1 of Fig. 3 is constructed from the points of the AR template adjusted based on the reference point coordinates V1m, the rotational coordinates G1m, and the magnification D1m in the marker coordinate system of the marker M.
For AR content C1, the placement information about the coordinates of the reference point in the marker coordinate system and the rotational coordinates is set in advance based on the AR content information described later. In addition, the template information applied to the template of AR content C1 includes the information on the points of the template.
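As an illustration, the adjustment of the template points can be sketched as follows; the order in which the per-axis rotations are composed is an assumption of the sketch, since the embodiment only states that rotation by G1m, scaling by D1m, and translation by V1m are applied:

    import numpy as np

    def place_template(points, V1m, G1m, D1m):
        # points: (N, 3) template point coordinates, reference point at (0, 0, 0).
        p, q, r = G1m  # rotation angles around the Xm, Ym, and Zm axes (radians)
        R1 = np.array([[1, 0, 0],
                       [0, np.cos(p), -np.sin(p)],
                       [0, np.sin(p),  np.cos(p)]])
        R2 = np.array([[ np.cos(q), 0, np.sin(q)],
                       [0, 1, 0],
                       [-np.sin(q), 0, np.cos(q)]])
        R3 = np.array([[np.cos(r), -np.sin(r), 0],
                       [np.sin(r),  np.cos(r), 0],
                       [0, 0, 1]])
        scaled = np.asarray(points) * np.asarray(D1m)   # enlargement or reduction by D1m
        rotated = scaled @ (R3 @ R2 @ R1).T             # rotation by G1m
        return rotated + np.asarray(V1m)                # parallel translation by V1m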
Therefore, the generation unit 18 uses the AR content information and the template information to convert the coordinates, in the marker coordinate system, of the points of the AR content into coordinates in the camera coordinate system. Furthermore, the coordinates in the camera coordinate system are converted into positions on the display screen (coordinates in the screen coordinate system). The projected image of AR content C1 is generated based on the converted coordinates.
The coordinates in the camera coordinate system are calculated by performing, on the coordinates in the marker coordinate system, a coordinate transformation (model-view transformation) based on the coordinates V1c and the rotational coordinates G1c. For example, when the model-view transformation is performed on the coordinates V1m, the coordinates V2c (X2c, Y2c, Z2c) of the reference point in the camera coordinate system are obtained.
The transformation from the marker coordinate system to the camera coordinate system is performed based on the determinants depicted in Figures 12 and 13. The generation unit 18 obtains the column vector Ac (Xc, Yc, Zc, 1) by multiplying the transformation matrix M from the marker coordinate system to the camera coordinate system by the column vector Am (Xm, Ym, Zm, 1).
Ac=M·Am (3)
By assigning the coordinates of a point in the marker coordinate system to be subjected to the coordinate transformation (model-view transformation) to the column vector Am (Xm, Ym, Zm, 1) and performing the matrix operation, the column vector Ac (Xc, Yc, Zc, 1) containing the coordinates of the point in the camera coordinate system is obtained.
Applying the submatrix (rotation matrix R) consisting of rows 1 to 3 and columns 1 to 3 of the transformation matrix M, depicted in Figure 12, performs a rotation that makes the orientation of the marker coordinate system match the orientation of the camera coordinate system. Applying the submatrix consisting of rows 1 to 3 and column 4 of the transformation matrix M performs the translation that matches the position of the marker coordinate system to the camera coordinate system.
As described above, the model-view transformation based on the transformation matrix M converts the coordinates (Xm, Ym, Zm), in the marker coordinate system, of each point included in AR content C into coordinates (Xc, Yc, Zc) in the camera coordinate system. The position coordinates V1m depicted in Figure 15 are converted into the position coordinates V2c by the model-view transformation.
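For illustration, the model-view transformation of Expression 3 can be sketched as follows, where R and V1c are the rotation matrix and the marker position in the camera coordinate system described with Figure 12 (the interface is hypothetical):

    import numpy as np

    def model_view(points_m, R, V1c):
        # Build the 4 x 4 transformation matrix M of Figure 12.
        M = np.eye(4)
        M[:3, :3] = R     # rows 1 to 3, columns 1 to 3: rotation
        M[:3, 3] = V1c    # rows 1 to 3, column 4: translation
        pts = np.asarray(points_m, dtype=float)
        Am = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous marker coordinates
        Ac = (M @ Am.T).T                              # Expression 3: Ac = M * Am
        return Ac[:, :3]                               # camera coordinates (Xc, Yc, Zc)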
Next, the perspective transformation, which determines the projected position of the AR content in the display image, will be described. The perspective transformation is a coordinate transformation from the camera coordinate system to the screen coordinate system. The screen coordinate system consists of two dimensions (Xs, Ys), and, for example, the center of the captured image obtained by the camera is used as its origin (origin Os). The coordinates, in the camera coordinate system, of the points of AR content C are converted into the screen coordinate system by the perspective transformation. The projected image of AR content C is generated based on the coordinates in the screen coordinate system obtained by the perspective transformation.
For example, the perspective transformation is performed based on the focal length f of the camera. The Xs coordinate in the screen coordinate system corresponding to the coordinates (Xc, Yc, Zc) in the camera coordinate system is calculated by Expression 4 below. The Ys coordinate in the screen coordinate system corresponding to the coordinates (Xc, Yc, Zc) in the camera coordinate system is calculated by Expression 5 below.
Xs=f·Xc/Zc (4)
Ys=f·Yc/Zc (5)
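For illustration, Expressions 4 and 5 can be sketched as follows; that f is given in pixel units (so that the result is in screen pixels) is an assumption of the sketch:

    def perspective(points_c, f):
        # Expressions 4 and 5: (Xs, Ys) = (f*Xc/Zc, f*Yc/Zc).
        return [(f * Xc / Zc, f * Yc / Zc) for Xc, Yc, Zc in points_c]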
The projected image of AR content C is generated based on the position coordinates (in the screen coordinate system) obtained by the perspective transformation from the position coordinates (in the camera coordinate system) of the points included in AR content C. The template information applied to the template of AR content C defines which points are interpolated to create planes and to which plane a texture (or image) is mapped. The projected image of AR content C is generated by mapping the textures or images, as defined in the AR template, onto the planes obtained by interpolating the position coordinates (in the screen coordinate system).
The coordinates on the captured image corresponding to the coordinates in the marker coordinate system are calculated by the model-view transformation and the perspective transformation described above, and these coordinates are used to generate the projected image of AR content C in accordance with the viewpoint of the camera. When the projected image of AR content C is combined with the captured image, the combined image appears on the screen as a three-dimensional image that provides the user with expanded visual information.
Alternatively, as another example of AR content display, the projected image of AR content C may be displayed on a transmissive display. In this mode as well, the image of the real space that the user obtains through the display aligns with the projected image of the AR content, so the user is provided with expanded visual information. When a transmissive display is used, displaying the projected image of the AR content in the present embodiment without combining it with the captured image may be regarded as the display of the AR content.
By applying the above processing to the AR content corresponding to the marker M, a superimposed image is generated in which the projected image of the AR content is superimposed on the captured image of the real space. The generation unit 18 may use the transformation matrix generated by the calculation unit 16 as the transformation matrix M.
Returning to the description of Figure 10, the imaging unit 12 captures images and inputs the captured images to the control unit 11. The imaging unit 12 captures images at a predetermined frame interval.
The storage unit 13 stores various types of information under the control of the control unit 11. The storage unit 13 stores the correspondence relationship information, the AR content information, and the template information. The display unit 14 displays images, such as the display image generated by the generation unit 18.
The various types of information will now be described. Figures 16A and 16B depict examples of the structure of the correspondence relationship information tables. The correspondence relationship information tables store the correspondence relationship information. Figure 16A depicts a first correspondence relationship information table for managing the conditions corresponding to each marker. Figure 16B depicts a second correspondence relationship information table for managing the conditions related to the positional information and the display information provided under each condition.
First, the first correspondence relationship information table associates marker IDs with condition IDs, and stores them. A condition ID is identification information that identifies a condition. For example, Figure 16A indicates that the three condition IDs P1, P2, and P3 apply to marker ID M1.
Next, the second correspondence relationship information table associates condition IDs, conditions, and content IDs, and stores them. A content ID is identification information that identifies AR content. For example, Figure 16B indicates that the condition with condition ID P1 applies when the positional information cos θu is equal to or greater than cos 75° and equal to or less than cos 45°, that is, when θu is between 45° and 75°.
The example in Figure 16B also indicates that, if the positional information satisfies this condition, the AR content corresponding to content ID C3 and content ID C4 is provided. For example, for user 112 in Fig. 1, when θu is 60°, AR content C3 and AR content C4 are provided. When the determination unit 17 identifies the information to be provided by referring to the correspondence relationship information tables and the generation unit 18 then generates the display image, the display image as depicted in Fig. 6 is provided to user 112.
In the example of Figure 16B, no condition applies in the ranges from 0° to 45° and from 135° to 180°. This is because the range in which the shape of the marker M is recognizable extends from 45° to 135°. If the recognizable range is wider, however, the ranges to be set as conditions become wider as well.
When U (Xu, Yu, Zu) is used as the positional information, the allowable range of values on each axis is defined as the condition. In addition, the distance from the marker M, the height, and the like may be defined as conditions together with θ. Although Figures 16A and 16B illustrate an example in which some AR content is provided for every condition, this is not restrictive. For example, AR content C1 and AR content C2 may be displayed under a certain condition, while no AR content is displayed under another condition.
In another use example, assume that there are dangerous goods at a position at a distance D1 from the marker in a direction D2. In this case, if the region around the position at distance D1 from the marker in direction D2 is defined as a condition and the positional information satisfies this condition, the AR content "Caution" can be provided as the display information. On the other hand, if the region other than this surrounding region is defined as another condition and the positional information satisfies that other condition, no AR content is displayed.
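For illustration, the selection performed by the determination unit 17 against the tables of Figures 16A and 16B can be sketched as follows; the ranges for condition IDs P2 and P3 are assumptions of the sketch, since the embodiment only gives the range for P1:

    import math

    first_table = {"M1": ["P1", "P2", "P3"]}  # marker ID -> applicable condition IDs
    second_table = {                          # condition ID -> (cos theta_u range, content IDs)
        "P1": ((math.cos(math.radians(75)), math.cos(math.radians(45))), ["C3", "C4"]),
        "P2": ((math.cos(math.radians(105)), math.cos(math.radians(75))), ["C1", "C2"]),
        "P3": ((math.cos(math.radians(135)), math.cos(math.radians(105))), ["C5", "C6"]),
    }

    def determine_content(marker_id, cos_theta_u):
        for cond_id in first_table.get(marker_id, []):
            (low, high), content_ids = second_table[cond_id]
            if low <= cos_theta_u <= high:
                return content_ids
        return []  # no condition applies (e.g. theta_u outside 45 to 135 degrees)

    # For user 112 (theta_u = 60 degrees), condition P1 matches:
    print(determine_content("M1", math.cos(math.radians(60))))  # ['C3', 'C4']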
Figure 17 depicts an example of the data structure of the AR content information table. The AR content information table stores the AR content information. The AR content information includes, for each piece of AR content, a content ID, a marker ID, a template ID, placement information, magnification information, supplementary information, and the like.
The content ID is information that identifies each piece of AR content. The marker ID is information that identifies each AR marker. The template ID is information that identifies a template. A template defines the shape and texture of the three-dimensional model of AR content displayed in the display image.
The placement information is information about the placement position and the rotation of the AR content relative to the reference object (for example, the AR marker). Specifically, the placement information includes the position coordinates and the rotational coordinates in the AR marker coordinate system. The magnification information is information that defines the enlargement or reduction of the three-dimensional model that follows the template. The supplementary information is attached to each piece of AR content; for example, it is the text to be displayed in the three-dimensional model that follows the template.
In Figure 17, for example, the AR content with content ID C1 is associated with the AR marker with marker ID MA, and is represented by the three-dimensional model defined by template ID T1. Based on the reference position coordinates (X1m, Y1m, Z1m), the rotational coordinates (P1m, Q1m, R1m), and the magnification (J1x, J1y, J1z), the AR content with content ID C1 is placed on the captured image according to the positional relationship between the camera and the AR marker. Furthermore, the AR content with content ID C1 places the text "Confirm that the valve is closed" in the three-dimensional model that follows template ID T1.
Figure 18 depicts an example of the data structure of the template information table. The template information table stores the template information. The template information includes the template ID of each AR template, coordinate data T11 of the vertices of each AR template, and plane configuration data T12 of the planes of each AR template. The information on each plane included in the plane configuration data includes a vertex order, which indicates the order of the vertices constituting the plane, and a texture ID. The texture ID indicates the identification information of the texture to be mapped to the plane (the identification information of an image file). For example, the reference point of each AR template is the 0th vertex.
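For illustration, the records of Figures 17 and 18 can be represented as follows (the field names are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class ARContent:                 # one row of the AR content information table (Figure 17)
        content_id: str              # e.g. "C1"
        marker_id: str               # e.g. "MA"
        template_id: str             # e.g. "T1"
        position: tuple              # placement: (X1m, Y1m, Z1m) in the marker coordinate system
        rotation: tuple              # placement: (P1m, Q1m, R1m)
        magnification: tuple         # (J1x, J1y, J1z)
        text: str = ""               # supplementary information, e.g. "Confirm that the valve is closed"

    @dataclass
    class Template:                  # one row of the template information table (Figure 18)
        template_id: str
        vertices: list               # T11: vertex coordinates; vertex 0 is the reference point
        faces: list = field(default_factory=list)  # T12: (vertex order, texture ID) per plane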
When the recognition unit 15 recognizes, in the captured image obtained from the imaging unit 12, the AR marker corresponding to marker ID MA, the calculation unit 16 calculates the positional information. If the positional information is cos 60°, AR content C3 and AR content C4 are identified, based on the correspondence relationship information, as the display information to be provided. The generation unit 18 then uses the AR content information of Figure 17 and the template information of Figure 18 to generate the display image including AR content C3 and AR content C4 (see Fig. 6). That is, the projected images of the AR content corresponding to the positional relationship between the imaging position and the marker are placed on the captured image.
Next, the functional structure of the management device 2 of Fig. 9 will be described. Figure 19 is a functional block diagram of the management device 2 according to the first embodiment. The management device 2 includes a communication unit 20, a control unit 21, and a storage unit 22. The communication unit 20 communicates with other devices. For example, the communication unit 20 communicates with the information processing apparatus 1 and sends the AR content information, the template information, and the correspondence relationship information.
The control unit 21 controls the various types of processing performed by the management device 2. For example, when receiving a request for the various types of information from the information processing apparatus 1, the control unit 21 reads the information from the storage unit 22 and controls the communication unit 20 to send the information to the information processing apparatus 1. In the present embodiment, the AR content information, the template information, and the correspondence relationship information are sent to the information processing apparatus 1 when a request is first received from the information processing apparatus 1. This makes inspection using AR possible even if there are places in the facility that communication radio waves cannot reach while the user carrying the information processing apparatus 1 performs inspection in the facility.
For example, the control unit 21 may determine the AR content information, the template information, and the correspondence relationship information to be sent according to where the information processing apparatus 1 is located or who the user operating the information processing apparatus 1 is. For example, the control unit 21 refers to management information and extracts, from the storage unit 22, the AR content information, template information, and correspondence relationship information related to the markers around the place where the information processing apparatus 1 is located.
The management device 2 may also provide the information processing apparatus 1 with the various types of information about the markers related to a scenario selected on the information processing apparatus 1, based on scenario information prepared in advance that describes the work procedure of each job. The scenario information includes the place related to the work and the marker IDs of the markers placed at that place.
The storage unit 22 stores the AR content information, the template information, and the correspondence relationship information. In addition, the storage unit 22 may store, as needed, management information about the installation position of each marker and scenario information about the scenarios. According to a request from the information processing apparatus 1, part or all of the AR content information, template information, and correspondence relationship information stored in the storage unit 22 is stored in the storage unit 13 of the information processing apparatus 1.
Next, the processing flow of the information providing method according to the present embodiment will be described. Figure 20 is a flowchart of the information providing method according to the first embodiment. Before the processing depicted in the flowchart, it is assumed that the information processing apparatus 1 has obtained the AR content information, the template information, and the correspondence relationship information from the management device 2 and stored them in the storage unit 13.
First, the recognition unit 15 obtains the captured image taken by the imaging unit 12 as the input image (operation 1). Then, the recognition unit 15 determines whether the reference object (marker) is recognized in the input image (operation 2). For example, a determination is made as to whether the input image includes an object that matches the template of the reference object. If the reference object is not recognized (No in operation 2), the information processing apparatus 1 ends the information providing process.
On the other hand, if the reference object is recognized (Yes in operation 2), the recognition unit 15 further obtains the identification information of the reference object, and outputs the recognition result and the identification information of the reference object to the calculation unit 16. For example, the recognition unit 15 outputs, to the calculation unit 16, the recognition result indicating that the marker has been recognized and its marker ID. Then, the calculation unit 16 calculates, based on the figure of the reference object in the input image, the positional information indicating the positional relationship between the reference object and the imaging position (operation 3). The calculation method described above or the like is used to calculate the positional information. The calculation unit 16 then outputs the positional information and the marker ID to the determination unit 17.
The determination unit 17 selects the display information based on the correspondence relationship information, the positional information, and the marker ID (operation 4). For example, the determination unit 17 identifies the condition IDs applicable to the marker ID according to the first correspondence relationship information, and searches the identified condition IDs for the condition corresponding to the position indicated by the positional information. The determination unit 17 then identifies the AR content corresponding to the condition found, and outputs the identified display information to the generation unit 18.
The generation unit 18 generates the display image based on the AR content information, the template information, and the input image (operation 5). The generation method described above or the like is used to generate the display image. Then, the display unit 14 displays the display image (operation 6). The information processing apparatus 1 then ends the information providing process.
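For illustration, one pass of the flowchart of Figure 20 can be sketched as follows; the unit objects and their interfaces are assumptions of the sketch:

    def information_providing_process(input_image, recognition_unit, calculation_unit,
                                      determination_unit, generation_unit, display_unit):
        result = recognition_unit.recognize(input_image)         # operations 1 and 2
        if result is None:
            return                                               # no reference object: end
        marker_id, marker_figure = result
        info = calculation_unit.calculate(marker_figure)         # operation 3: positional information
        contents = determination_unit.select(marker_id, info)    # operation 4: display information
        image = generation_unit.generate(input_image, contents)  # operation 5: display image
        display_unit.show(image)                                 # operation 6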
As described above, the information processing apparatus 1 according to the present embodiment can provide information according to the environment in which the user is present. That is, since information is provided to the user based on the positional relationship between the reference object and the imaging position, information that depends on the environment is provided with the reference object as the basis.
[Second embodiment]
In the first embodiment, the information processing apparatus selects the information to be provided. In the second embodiment, the management device selects the information to be provided. For example, the management device selects the AR content to be superimposed on the display image, and then sends the AR content information and the template information to the information processing apparatus. Thus, in the second embodiment, the information processing apparatus does not store the AR content information, the template information, and the correspondence relationship information in advance.
The system according to the second embodiment includes an information processing apparatus 3 and a management device 4. As in the first embodiment, the information processing apparatus 3 is a communication terminal held by the user, and the management device 4 is a server computer that manages the information processing apparatus 3.
First, the functional structure of the information processing apparatus 3 according to the second embodiment will be described. Figure 21 is a functional block diagram of the information processing apparatus 3 according to the second embodiment. The information processing apparatus 3 includes a communication unit 30, a control unit 31, an imaging unit 32, a storage unit 33, and a display unit 34.
The communication unit 30 communicates with the management device 4. For example, the communication unit 30 sends an information provision request to the management device 4. The information provision request includes the marker ID recognized by the recognition unit 35 and the positional information calculated by the calculation unit 36. The information provision request may also include a user ID, time information, and location information obtained by GPS or the like, which are used to identify the user.
After sending the information provision request, the communication unit 30 obtains the display information from the management device 4. The display information includes the AR content information selected by the management device 4 and the template information related to it.
The control unit 31 controls the various types of processing performed by the information processing apparatus 3. The control unit 31 includes a recognition unit 35, a calculation unit 36, and a generation unit 37. The recognition unit 35 performs processing similar to that of the recognition unit 15 according to the first embodiment, and the calculation unit 36 performs processing similar to that of the calculation unit 16 according to the first embodiment.
Furthermore, after calculating the positional information, the calculation unit 36 generates the information provision request according to the second embodiment and controls the communication unit 30 to send it to the management device 4. The generation unit 37 generates the display image based on the display information obtained from the management device 4. The generation of the display image is the same as in the first embodiment.
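For illustration, the information provision request can be encoded as follows; the JSON encoding and field names are assumptions of the sketch, since the embodiment specifies only which items the request includes:

    import json

    request = {
        "marker_id": "MA",           # recognized by the recognition unit 35
        "positional_info": 0.5,      # cos(theta_u) calculated by the calculation unit 36
        "user_id": "user112",        # optional items used to identify the user
        "time": "2014-07-18T10:00:00",
        "gps": {"lat": 35.0, "lon": 139.0},
    }
    payload = json.dumps(request)    # sent to the management device 4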
The storage unit 33 stores various types of information. However, the storage unit 33 does not obtain the AR content information, the template information, and the correspondence relationship information from the management device 4 in advance; it temporarily stores the display information obtained from the management device 4. The display unit 34 displays the display image, like the display unit 14 according to the first embodiment.
Then, the functional structure of management devices 4 will be described.Figure 22 is the functional block diagram of the management devices 4 according to the second embodiment.Management devices 4 comprises communication unit 40, control module 41 and storage unit 42.
Communication unit 40 communicates with signal conditioning package 3.Such as, communication unit 40 receives information from signal conditioning package 3 and provides request.In addition, communication unit 40 will depend on that the display information of positional information is sent to signal conditioning package 3 under the control of control module 41, the position relationship between this positional information indicating user position and mark.
The control unit 41 controls the various types of processing performed by the management apparatus 4. The control unit 41 includes a determining unit 43. The determining unit 43 identifies the information to be provided based on the positional information and the marker ID included in the information provision request.
For example, the determining unit 43 refers to the correspondence relationship information and identifies the AR content whose condition matches the positional information. The determining unit 43 then reads the AR content information and the template information related to the identified AR content from the storage unit 42 and transmits the read information to the information processing apparatus 3 by controlling the communication unit 40.
The storage unit 42 stores various types of information, such as AR content information, template information, and correspondence relationship information. The data structures of these types of information are the same as in the first embodiment.
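As a rough illustration of how the determining unit 43 might evaluate the correspondence relationship information, the Python sketch below expresses each condition as an area given by a distance range and a direction range from the marker (cf. the radially set areas recited in the claims). The record layout and the numeric thresholds are assumptions, not values from the embodiment.

    import math

    CORRESPONDENCE = [
        # (marker_id, min_dist, max_dist, min_angle_deg, max_angle_deg, content_id)
        ("M001", 0.0, 1.0,  -45.0,  45.0, "C_near_front"),
        ("M001", 1.0, 5.0,  -45.0,  45.0, "C_far_front"),
        ("M001", 0.0, 5.0,  135.0, 225.0, "C_behind"),
    ]

    def select_contents(marker_id, x, z):
        """Return content IDs whose area condition contains the capture position.

        (x, z) is the image capture position on the floor plane, expressed in
        marker coordinates (the marker faces the +z direction).
        """
        dist = math.hypot(x, z)
        angle = math.degrees(math.atan2(x, z)) % 360.0  # 0 deg = straight ahead
        hits = []
        for mid, d0, d1, a0, a1, cid in CORRESPONDENCE:
            if mid != marker_id or not (d0 <= dist < d1):
                continue
            # accept angles given either as 0..360 or as negative offsets
            if a0 <= angle < a1 or a0 <= angle - 360.0 < a1:
                hits.append(cid)
        return hits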
Next, the processing flows of the information processing apparatus 3 and the management apparatus 4 will be described. Figure 23 depicts a flowchart of the information providing method according to the second embodiment.
First, the recognition unit 35 of the information processing apparatus 3 obtains an input image from the imaging unit 32 (operation 11). The recognition unit 35 then determines whether a reference object is recognized in the input image (operation 12); for example, it determines whether an object matching a template of the reference object is included in the input image. If no reference object is recognized (No in operation 12), the information processing apparatus 3 ends the information providing process.
On the other hand, if a reference object is recognized (Yes in operation 12), the recognition unit 35 outputs the recognition result and the identification information of the reference object to the computing unit 36. Based on the figure of the reference object in the input image, the computing unit 36 then calculates positional information indicating the positional relationship between the reference object and the image capture position (operation 13). The computing unit 36 generates an information provision request including the positional information and the marker ID, and the communication unit 30 transmits the request to the management apparatus 4 under the control of the control unit 31 (operation 14).
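Operation 13 can be realized with conventional perspective-n-point pose estimation from the four corners of a square marker. The sketch below uses OpenCV purely for illustration; the marker side length, the calibrated camera parameters, and the corner coordinates are assumed to be available and are not prescribed by the embodiment.

    import cv2
    import numpy as np

    MARKER_SIDE = 0.05  # marker side length in metres (assumed value)

    # 3D corners of the square marker in its own coordinate system (z = 0),
    # listed in the same order in which the recognition step reports them.
    OBJECT_POINTS = np.array([
        [-MARKER_SIDE / 2,  MARKER_SIDE / 2, 0.0],
        [ MARKER_SIDE / 2,  MARKER_SIDE / 2, 0.0],
        [ MARKER_SIDE / 2, -MARKER_SIDE / 2, 0.0],
        [-MARKER_SIDE / 2, -MARKER_SIDE / 2, 0.0],
    ])

    def capture_position(corners_px, camera_matrix, dist_coeffs):
        """Return the image capture position expressed in marker coordinates.

        corners_px: 4x2 array of the marker corners detected in the input image.
        """
        ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                      np.asarray(corners_px, dtype=np.float64),
                                      camera_matrix, dist_coeffs)
        if not ok:
            return None
        rot, _ = cv2.Rodrigues(rvec)      # rotation: marker -> camera
        return (-rot.T @ tvec).ravel()    # camera origin in marker coordinates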
Meanwhile, the communication unit 40 of the management apparatus 4 receives the information provision request (operation 21). The determining unit 43 then selects display information based on the positional information and the marker ID included in the request (operation 22). Here, the AR content whose condition matches the positional information is selected with reference to the correspondence relationship information, and the AR content information and the template information of that AR content are selected. The communication unit 40 then transmits the display information, which includes the AR content information and the template information, to the information processing apparatus 3 under the control of the control unit 41 (operation 23).
The communication unit 30 of the information processing apparatus 3 then receives the display information (operation 15). The generation unit 37 generates a display image using the received display information and the input image obtained in operation 11 (operation 16), and the display unit 34 displays the display image (operation 17).
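Operation 16 can be pictured as projecting the AR content onto the captured frame using the pose obtained in operation 13. The following Python sketch is illustrative only: it reduces the template information to labelled 3D points, whereas actual template information would carry polygons and textures.

    import cv2
    import numpy as np

    def generate_display_image(input_image, rvec, tvec, camera_matrix,
                               dist_coeffs, template_points, label):
        """Overlay AR content points and a label onto a copy of the input image."""
        pts, _ = cv2.projectPoints(np.asarray(template_points, dtype=np.float64),
                                   rvec, tvec, camera_matrix, dist_coeffs)
        pts = pts.reshape(-1, 2)
        display = input_image.copy()
        for u, v in pts:
            cv2.circle(display, (int(u), int(v)), 4, (0, 0, 255), -1)
        cv2.putText(display, label, (int(pts[0][0]), int(pts[0][1]) - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        return display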
As described above, in the present embodiment, information is provided depending on the positional relationship between the image capture position and the marker. Since the AR content information and the template information related to the AR content to be displayed are sent from the management apparatus 4, the information processing apparatus 3 does not have to obtain them in advance, and the storage area required on the terminal can therefore be reduced.
[Modification]
The second embodiment may be modified as described below. For example, the control unit 31 of the information processing apparatus 3 may include only the generation unit 37, while the control unit 41 of the management apparatus 4 has functions equivalent to those of the recognition unit 35 and the computing unit 36 of the information processing apparatus 3. In this case, since the information processing apparatus 3 is only expected to generate the display image, its processing load can be reduced.
That is, the communication unit 30 of the information processing apparatus 3 transmits the input image to the management apparatus 4 as the information provision request. The functions equivalent to the recognition unit 35 and the computing unit 36, together with the determining unit 43 on the management apparatus 4 side, then select the display information according to the positional information between the image capture position and the marker, and the communication unit 40 of the management apparatus 4 transmits the selected display information to the information processing apparatus 3. The display unit 34 of the information processing apparatus 3 then displays the display image based on the display information.
Alternatively, the control unit 41 of the management apparatus 4 may have functions equivalent to those of the recognition unit 35, the computing unit 36, and the generation unit 37 of the information processing apparatus 3. In this case as well, since the information processing apparatus 3 is only expected to display the received display image, its processing load can be reduced.
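As an illustration of this modified exchange, the terminal could upload the encoded input image and receive the selected display information in return. A minimal Python sketch follows; the endpoint URL and the use of the requests library are assumptions made purely for this sketch.

    import cv2
    import requests

    ENDPOINT = "http://management-apparatus.example/provide"  # hypothetical URL

    def request_display_info(input_image):
        """Send the input image as the information provision request."""
        ok, jpeg = cv2.imencode(".jpg", input_image)
        if not ok:
            raise RuntimeError("failed to encode the input image")
        resp = requests.post(ENDPOINT,
                             files={"image": ("frame.jpg", jpeg.tobytes(),
                                              "image/jpeg")})
        resp.raise_for_status()
        return resp.json()  # display information selected on the server side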
[Example of Hardware Configuration]
Examples of the hardware configurations of the information processing apparatus and the management apparatus according to the embodiments will now be described. First, the hardware configurations of the information processing apparatus 1 according to the first embodiment and the information processing apparatus 3 according to the second embodiment will be described. The information processing apparatuses of the modifications are likewise realized by the computer 300 depicted in Figure 24.
Figure 24 depicts an example of the hardware configuration of the information processing apparatus according to the embodiments. The information processing apparatus according to each embodiment is realized by a computer 300; for example, the functional blocks depicted in Figure 10 and Figure 21 are realized by the hardware configuration depicted in Figure 24. The computer 300 includes, for example, a processor 301, a random access memory (RAM) 302, a read-only memory (ROM) 303, a drive device 304, a storage medium 305, an input interface (input I/F) 306, an input device 307, an output interface (output I/F) 308, an output device 309, a communication interface (communication I/F) 310, a camera module 311, an acceleration sensor 312, an angular velocity sensor 313, a display interface (display I/F) 314, a display device 315, and a bus 316. These hardware components are interconnected via the bus 316.
The communication interface 310 controls communication via the network N. The communication controlled by the communication interface 310 may access the network N via a wireless base station using wireless communication. The input interface 306 is connected to the input device 307 and transmits input signals received from the input device 307 to the processor 301. The output interface 308 is connected to the output device 309 and causes the output device 309 to produce output according to instructions from the processor 301.
The input device 307 transmits input signals in response to operations. The input device 307 is, for example, a key device such as a keyboard or buttons mounted on the main body of the computer 300, or a pointing device such as a mouse or a touch pad. The output device 309 outputs information according to the control of the processor 301; the output device 309 is, for example, an audio output device such as a loudspeaker.
The display interface 314 is connected to the display device 315. The display interface 314 causes the display device 315 to display image information that the processor 301 has written to a display buffer provided in the display interface 314. The display device 315 outputs information according to the control of the processor 301; an image output device, such as a display or a transparent display, is used as the display device 315. In addition, an input/output device such as a touch screen may serve as both the input device 307 and the display device 315, and the input device 307 and the display device 315 may be connected to the computer 300 externally rather than being incorporated in it.
When a transparent display is used, control may be performed such that the projected image of the AR content is displayed at an appropriate position in the transparent display without being composited with the captured image. This allows the user to visually perceive the real space and the AR content in harmony with each other.
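For the transparent-display case just described, only the overlay layer is drawn, so a projected point must be mapped into panel coordinates rather than composited with the captured frame. A minimal sketch, assuming a pre-calibrated 3x3 homography from camera image coordinates to panel coordinates; the homography itself is an assumption for illustration.

    import numpy as np

    def panel_position(point_px, cam_to_panel_h):
        """Map a projected camera-image point into transparent-panel coordinates."""
        u, v = point_px
        p = cam_to_panel_h @ np.array([u, v, 1.0])
        return p[:2] / p[2]  # normalized panel (x, y)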
The RAM 302 is a readable and writable memory device and may be, for example, a semiconductor memory such as a static RAM (SRAM) or a dynamic RAM (DRAM), or a non-RAM device such as a flash memory. The ROM 303 may be a programmable ROM (PROM).
The drive device 304 performs at least one of reading and writing of the information stored in the storage medium 305, and the storage medium 305 stores the information written by the drive device 304. The storage medium 305 is at least one of storage media such as a hard disk, a solid state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), and a Blu-ray disc. The computer 300 includes a drive device 304 corresponding to the type of the storage medium 305 in the computer 300.
The camera module 311 includes an image sensor; it reads the values measured by the image sensor and writes them to a frame buffer for input images provided in the camera module 311. The acceleration sensor 312 measures the acceleration applied to it, and the angular velocity sensor 313 measures the angular velocity of its operation.
The processor 301 loads the programs stored in the ROM 303 and the storage medium 305 into the RAM 302 and performs processing according to the code of the loaded programs. For example, the functions of the control unit 11 and the control unit 31 are realized by the processor 301 controlling the other hardware components based on the information providing program depicted in Figure 20 and Figure 23, which may be part of an AR control program. The functions of the communication unit 10 and the communication unit 30 are realized by the processor 301 performing data communication by controlling the communication interface 310 and storing the received data in the storage medium 305.
The functions of the storage unit 13 and the storage unit 33 are realized by the ROM 303 and the storage medium 305 storing program files and data files, or by the RAM 302 being used as a work area of the processor 301. For example, the AR content information, the template information, and the correspondence relationship information are stored in the RAM 302.
The functions of the imaging unit 12 and the imaging unit 32 are realized by the camera module 311 writing image data to the frame buffer for input images and by the processor 301 reading the image data from that frame buffer. In a monitoring mode, the image data is written in parallel to the display buffer for the display device 315 and to the frame buffer for input images.
The functions of the display unit 14 and the display unit 34 are realized by writing the image data generated by the processor 301 to the display buffer included in the display interface 314 and by the display device 315 displaying the image data held in that display buffer.
Figure 25 depicts an example of the structure of the programs that operate on the computer 300. An operating system (OS) 502 runs on the computer 300. The processor 301 controls and manages the hardware 501 in accordance with the code of the OS 502, so that the processing of an application program (AP) 504 and middleware (MW) 503 is carried out on the hardware 501.
On the computer 300, the programs such as the OS 502, the MW 503, and the AP 504 are loaded into, for example, the RAM 302 and then executed by the processor 301. The AR control program including the information providing program according to each embodiment is, for example, middleware (MW 503) called from the AP 504; alternatively, it is an application program (AP 504) that realizes the AR function. The AR control program is stored in the storage medium 305. The storage medium 305 storing the information providing program alone, or storing the AR control program including the information providing program, may be distributed separately from the main body of the computer 300.
Next, the hardware configurations of the management apparatus 2 according to the first embodiment and the management apparatus 4 according to the second embodiment will be described. Figure 26 depicts an example of the hardware configuration of the management apparatus according to the embodiments. The management apparatus 2 and the management apparatus 4 are realized by a computer 400. The computer 400 depicted in Figure 26 may also be used to realize the management apparatuses presented as modifications.
For example, the functional blocks depicted in Figure 19 and Figure 22 are realized by the hardware configuration depicted in Figure 26. The computer 400 includes, for example, a processor 401, a random access memory (RAM) 402, a read-only memory (ROM) 403, a drive device 404, a storage medium 405, an input interface (input I/F) 406, an input device 407, an output interface (output I/F) 408, an output device 409, a communication interface (communication I/F) 410, a storage area network (SAN) interface (SAN I/F), and a bus 412. These hardware components are interconnected via the bus 412.
The processor 401, the RAM 402, the ROM 403, the drive device 404, the storage medium 405, the input interface (input I/F) 406, the input device 407, the output interface (output I/F) 408, the output device 409, and the communication interface (communication I/F) 410 are, for example, hardware similar to the processor 301, the RAM 302, the ROM 303, the drive device 304, the storage medium 305, the input interface 306, the input device 307, the output interface 308, the output device 309, and the communication interface 310, respectively. The storage area network (SAN) interface (SAN I/F) is an interface for connecting the computer 400 to a SAN and includes a host bus adapter (HBA).
The processor 401 loads a management program stored in the ROM 403 and the storage medium 405 into the RAM 402 and carries out the processing of the control unit 21 and the control unit 41 according to the code of the loaded program; at that time, the RAM 402 is used as a work area of the processor 401. The management program includes an information providing program related to the information provision processing in the management apparatus 2 and the management apparatus 4.
The functions of the storage unit 22 and the storage unit 42 are realized by the ROM 403 and the storage medium 405 storing program files and data files, or by the RAM 402 being used as a work area of the processor 401. The functions of the communication unit 20 and the communication unit 40 are realized by the processor 401 performing communication processing by controlling the communication interface 410.

Claims (15)

1. An information processing apparatus, comprising:
a computing unit configured to calculate positional information based on a figure of a reference object recognized from an input image, the positional information indicating a positional relationship between the reference object and an image capture position of the input image; and
a determining unit configured to select, based on the positional information, at least one piece of display data from a plurality of pieces of display data associated with the reference object.
2. The information processing apparatus according to claim 1, wherein
the reference object has a known shape, and
the computing unit calculates the positional information based on a comparison between the figure and the known shape.
3. The information processing apparatus according to claim 1 or 2, further comprising:
a storage unit configured to store correspondence relationship information that associates each piece of the plurality of pieces of display data with one of a plurality of conditions about the positional information,
wherein the determining unit selects the at least one piece of display data based on the correspondence relationship information and the positional information.
4. The information processing apparatus according to claim 3, wherein each of the conditions is defined as an area of positions, and
the determining unit identifies a certain condition whose area includes the position indicated by the positional information and selects the at least one piece of display data corresponding to the certain condition.
5. The information processing apparatus according to claim 4, wherein the areas are set radially with respect to the reference object.
6. The information processing apparatus according to claim 4, wherein each of the areas is indicated by a distance from and a direction with respect to the reference object.
7. The information processing apparatus according to any one of claims 1 to 6, further comprising:
a generation unit configured to generate image data that displays, on the input image, the at least one piece of display data at a certain position set in advance with respect to the reference object.
8. The information processing apparatus according to any one of claims 1 to 7, wherein
the computing unit calculates, as the positional information, a three-dimensional position in a three-dimensional space that corresponds to the image capture position with respect to the reference object.
9. A display method executed by a computer, comprising:
calculating positional information based on a figure of a reference object recognized from an input image, the positional information indicating a positional relationship between the reference object and an image capture position of the input image; and
selecting, based on the positional information, at least one piece of display data from a plurality of pieces of display data associated with the reference object.
10. The display method according to claim 9, wherein the positional information is calculated based on a comparison between the figure of the reference object and a known shape.
11. The display method according to claim 9 or 10, wherein
the at least one piece of display data is selected based on the positional information and correspondence relationship information that associates each piece of the plurality of pieces of display data with one of a plurality of conditions about the positional information.
12. The display method according to claim 11, wherein each of the conditions is defined as an area of positions, and
the at least one piece of display data corresponding to a certain condition whose area includes the position indicated by the positional information is selected.
13. The display method according to claim 12, wherein the areas are set radially with respect to the reference object.
14. The display method according to any one of claims 9 to 13, further comprising:
generating image data that displays, on the input image, the at least one piece of display data at a certain position set in advance with respect to the reference object.
15. The display method according to any one of claims 9 to 14, wherein
the positional information is a three-dimensional position in a three-dimensional space that corresponds to the image capture position with respect to the reference object.
CN201410341965.7A Active CN104346612B (en) 2013-07-24 2014-07-17 Information processing apparatus and display method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-153191 2013-07-24
JP2013153191A JP6225538B2 (en) 2013-07-24 2013-07-24 Information processing apparatus, system, information providing method, and information providing program

Publications (2)

Publication Number Publication Date
CN104346612A true CN104346612A (en) 2015-02-11
CN104346612B CN104346612B (en) 2018-12-25

Family

ID=51162472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410341965.7A Active CN104346612B (en) 2013-07-24 2014-07-17 Information processing apparatus and display method

Country Status (5)

Country Link
US (1) US20150029219A1 (en)
EP (1) EP2830022A3 (en)
JP (1) JP6225538B2 (en)
CN (1) CN104346612B (en)
AU (1) AU2014203449B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942049B (en) * 2014-04-14 2018-09-07 百度在线网络技术(北京)有限公司 Implementation method, client terminal device and the server of augmented reality
JP6524706B2 (en) * 2015-02-27 2019-06-05 富士通株式会社 Display control method, display control program, and information processing apparatus
JP6505836B2 (en) 2015-05-26 2019-04-24 三菱重工業株式会社 Driving support device and driving support method
JP2017054185A (en) * 2015-09-07 2017-03-16 株式会社東芝 Information processor, information processing method, and information processing program
US10740614B2 (en) 2016-04-14 2020-08-11 Nec Corporation Information processing device, information processing method, and program storing medium
JP2018005091A (en) * 2016-07-06 2018-01-11 富士通株式会社 Display control program, display control method and display controller
WO2018123022A1 (en) * 2016-12-28 2018-07-05 株式会社メガハウス Computer program, display device, head worn display device, and marker
US20230021345A1 (en) * 2019-12-26 2023-01-26 Nec Corporation Information processing device, control method, and storage medium
KR20210107409A (en) 2020-02-24 2021-09-01 삼성전자주식회사 Method and apparatus for transmitting video content using edge computing service

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101379369A (en) * 2006-01-09 2009-03-04 诺基亚公司 Displaying network objects in mobile devices based on geolocation
WO2012123623A1 (en) * 2011-03-16 2012-09-20 Nokia Corporation Method and apparatus for displaying interactive preview information in a location-based user interface
JP2013059573A (en) * 2011-09-14 2013-04-04 Namco Bandai Games Inc Program, information memory medium and game apparatus
WO2013061504A1 (en) * 2011-10-27 2013-05-02 Sony Corporation Image processing apparatus, image processing method, and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2381644A (en) * 2001-10-31 2003-05-07 Cambridge Display Tech Ltd Display drivers
EP1751499B1 (en) * 2004-06-03 2012-04-04 Making Virtual Solid, L.L.C. En-route navigation display method and apparatus using head-up display
WO2005119539A1 (en) 2004-06-04 2005-12-15 Mitsubishi Denki Kabushiki Kaisha Certificate issuance server and certification system for certifying operating environment
US20080310686A1 (en) 2007-06-15 2008-12-18 Martin Kretz Digital camera system and method of storing image data
US9204050B2 (en) * 2008-12-25 2015-12-01 Panasonic Intellectual Property Management Co., Ltd. Information displaying apparatus and information displaying method
JP2011244058A (en) * 2010-05-14 2011-12-01 Sony Corp Information processing device, information processing system, and program
JP5255595B2 (en) * 2010-05-17 2013-08-07 株式会社エヌ・ティ・ティ・ドコモ Terminal location specifying system and terminal location specifying method
JP2012215989A (en) 2011-03-31 2012-11-08 Toppan Printing Co Ltd Augmented reality display method
JP6121647B2 (en) * 2011-11-11 2017-04-26 ソニー株式会社 Information processing apparatus, information processing method, and program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106445088A (en) * 2015-08-04 2017-02-22 上海宜维计算机科技有限公司 Reality augmenting method and system
CN106445088B (en) * 2015-08-04 2020-05-22 上海宜维计算机科技有限公司 Method and system for reality augmentation
CN107481323A (en) * 2016-06-08 2017-12-15 创意点子数位股份有限公司 Mix the interactive approach and its system in real border
WO2018036408A1 (en) * 2016-08-24 2018-03-01 丰唐物联技术(深圳)有限公司 Interaction method and system based on augmented reality
CN109348209A (en) * 2018-10-11 2019-02-15 北京灵犀微光科技有限公司 Augmented reality display device and vision calibration method
CN112651270A (en) * 2019-10-12 2021-04-13 北京七鑫易维信息技术有限公司 Gaze information determination method and apparatus, terminal device and display object

Also Published As

Publication number Publication date
AU2014203449B2 (en) 2015-07-30
EP2830022A3 (en) 2015-02-18
AU2014203449A1 (en) 2015-02-12
US20150029219A1 (en) 2015-01-29
CN104346612B (en) 2018-12-25
JP2015022737A (en) 2015-02-02
JP6225538B2 (en) 2017-11-08
EP2830022A2 (en) 2015-01-28

Similar Documents

Publication Publication Date Title
CN104346612A (en) Information processing apparatus, and displaying method
AU2014277858B2 (en) System and method for controlling a display
JP6171671B2 (en) Information processing apparatus, position specifying method, and position specifying program
JP6314394B2 (en) Information processing apparatus, setting method, setting program, system, and management apparatus
US10074217B2 (en) Position identification method and system
KR102051889B1 (en) Method and system for implementing 3d augmented reality based on 2d data in smart glass
JP5991423B2 (en) Display device, display method, display program, and position setting system
CN104574267A (en) Guiding method and information processing apparatus
EP2866088B1 (en) Information processing apparatus and method
JP6500355B2 (en) Display device, display program, and display method
JP2015005181A (en) Information processor, determination method and determination program
JP2019175165A (en) Object tracking device, object tracking method, and object tracking program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant