WO2022123922A1 - Information processing system - Google Patents

Information processing system

Info

Publication number
WO2022123922A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
user
content
private
information
Prior art date
Application number
PCT/JP2021/038963
Other languages
French (fr)
Japanese (ja)
Inventor
泰士 山本
宏樹 林
真治 木村
幹生 岩村
江利子 大関
修 後藤
Original Assignee
株式会社Nttドコモ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Nttドコモ filed Critical 株式会社Nttドコモ
Priority to JP2022568088A priority Critical patent/JPWO2022123922A1/ja
Publication of WO2022123922A1 publication Critical patent/WO2022123922A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics

Definitions

  • One aspect of the present invention relates to an information processing system.
  • Patent Document 1 describes that private content owned by a communication terminal and public content not owned by the communication terminal are simultaneously displayed on the same screen of the user's communication terminal.
  • In a system that simultaneously displays private content and public content on the same screen as described above, it may be easier for the user to process the desired content if one of the two types of content is displayed predominantly according to the user's surroundings. Usually, however, the display area of each type of content is fixed, so situations can arise in which it is not easy for the user to process the content.
  • One aspect of the present invention has been made in view of the above circumstances, and an object thereof is to improve the ease of processing content.
  • The information processing system according to one aspect of the present invention includes a storage unit that stores content information in which objects in real space are associated with AR content, an acquisition unit that acquires peripheral space information including at least an image of the real space around the user, and an area determination unit that determines the display area of content, including the AR content, based on the peripheral space information.
  • In this information processing system, the display area of the content is determined based on peripheral space information that includes at least an image of the real space around the user.
  • According to such a system, the display area changes as the peripheral space information (that is, the spatial information around the user) changes.
  • As a result, each area can be adjusted according to the surrounding spatial information, improving the ease with which the user can process content.
  • As described above, according to one aspect of the present invention, the ease of processing content can be improved.
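
As a concrete illustration of the three components just listed (storage unit, acquisition unit, area determination unit), the following is a minimal Python sketch; all class, field, and method names are hypothetical and not taken from the publication:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentRecord:
    """One entry of the content information: a real-space object tied to AR content."""
    object_id: str       # identifier of the real-space object (e.g., a building)
    ar_content: str      # AR content to superimpose on it (sign, advertisement, ...)

@dataclass
class PeripheralSpaceInfo:
    """Peripheral space information: at least an image of the user's surroundings."""
    image: bytes                           # captured image of the real space
    user_position: Optional[tuple] = None  # positioning result, if available

class AreaDeterminationUnit:
    """Determines the display area of content (including AR content)."""
    def determine(self, info: PeripheralSpaceInfo) -> dict:
        # A real implementation would analyze the image; this stub just returns
        # a fixed split of the screen as a placeholder.
        return {"public": "upper", "private": "lower"}
```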
  • FIG. 1 is a diagram illustrating an outline of the information processing system 1 according to the present embodiment.
  • In the information processing system 1, the image generated by the image generation server 50 is displayed on the communication terminal 10. That is, the user views the image generated by the image generation server 50 on the screen of the communication terminal 10.
  • FIGS. 1(a) to 1(c) show examples of images generated by the image generation server 50 and displayed on the screen of the communication terminal 10.
  • The image generation server 50 generates an image in which AR content and the like are superimposed on the image captured by the communication terminal 10.
  • Based on the peripheral space information, the information processing system 1 determines a public area in which AR content is depicted (superimposed) on the real-space image and a private area in which private content related to the user carrying the communication terminal 10 is depicted (superimposed), and generates an image (a depicted image) in which content is depicted in each area.
  • The public area is an area for depicting AR content associated with objects in real space. AR content in the public area is, for example, content related to signs, advertisements, works of art, and various entertainment works.
  • The private area is an area in which the user's own content (private content) is depicted.
  • Private content is, for example, content such as an Internet search screen, games, e-mail, SNS, or maps.
  • More specifically, in the information processing system 1, the public area, the private area, and the non-depiction area are determined based on the peripheral space information. That is, in the information processing system 1, the sizes of the public area, the private area, and the non-depiction area change according to the peripheral space information.
  • The non-depiction area is an area that secures the user's field of view by depicting no content in it.
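
The three-way split can be pictured as a partition of the screen. The sketch below (hypothetical names; pixel coordinates assumed for a portrait 1080x1920 display) contrasts a walking layout like FIG. 1(b) with a driving layout like FIG. 1(c):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

@dataclass
class AreaLayout:
    public: Optional[Rect]         # AR content tied to real-space objects
    private: Optional[Rect]        # the user's own content (mail, SNS, map, ...)
    non_depiction: Optional[Rect]  # kept free of content to secure the view

# Walking (cf. FIG. 1(b)): large public and private areas, no non-depiction area.
walking = AreaLayout(
    public=Rect(0, 0, 1080, 1200),
    private=Rect(0, 1200, 1080, 720),
    non_depiction=None,
)

# Driving (cf. FIG. 1(c)): smaller content areas, central band kept clear.
driving = AreaLayout(
    public=Rect(0, 0, 1080, 480),
    private=Rect(0, 1560, 1080, 360),
    non_depiction=Rect(0, 480, 1080, 1080),
)
```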
  • As shown in FIG. 1(a), at least a public area A1 and a private area A2 are set in the image displayed on the communication terminal 10 (the screen of the communication terminal 10).
  • In the example shown in FIG. 1(b), a plurality of AR content items F associated with objects in the real space are depicted in the public area A1.
  • Specifically, in FIG. 1(b), the public area A1 depicts AR content F as a sign associated with the building of a station, AR content F as a sign associated with the building of a supermarket, AR content F as a sign associated with the building of a cafe, and AR content F as an advertisement associated with a building selling clothing.
  • In the example of FIG. 1(b), content M related to a map and content S related to an SNS are depicted in the private area A2.
  • In this case, taking the peripheral space information into consideration, the area containing many buildings for which AR content is depicted (the upper part in FIG. 1(b)) is set as the public area A1, and the area containing few such buildings (the lower part) is set as the private area A2.
  • In the example shown in FIG. 1(b), the user is identified as walking by considering, for example, information on the user's moving speed; the public area A1 and the private area A2 are therefore set relatively large, and no non-depiction area is set. This is because a non-depiction area for securing the user's field of view is unnecessary while the user is walking.
  • In the example shown in FIG. 1(c), the public area A1 depicts AR content F as a sign associated with the building of a restaurant and AR content F as a sign associated with the building of a cafe.
  • In the example shown in FIG. 1(c), content M related to the map is depicted in the private area A2.
  • In this example, the user is identified as riding in a car by considering, for example, information on the user's moving speed; the public area A1 and the private area A2 are therefore set relatively small, and a non-depiction area A3 is set. In this case, taking the peripheral space information into consideration, the area containing many buildings for which AR content is depicted (the upper part) is set as the public area A1, the area containing few such buildings (the lower part) is set as the private area A2, and the area for which visibility is most needed while driving (the central part in FIG. 1(c)) is set as the non-depiction area A3.
  • According to such an information processing system 1, the public area A1, the private area A2, and the non-depiction area A3 change as the peripheral space information and the like (that is, the spatial information around the user) change.
  • As a result, the areas can be adjusted according to the surrounding spatial information: for example, the public area A1 can be enlarged when AR content corresponding to surrounding buildings should be displayed predominantly, such as at a tourist spot; the private area A2 can be enlarged when the user's private content should be displayed predominantly rather than AR content related to surrounding buildings; and the non-depiction area A3 can be enlarged when securing the user's field of view should take priority. This preserves the ease of content processing by the user while preventing the user's visibility from deteriorating.
  • FIG. 2 is a block diagram showing a functional configuration of the information processing system 1 according to the present embodiment.
  • As shown in FIG. 2, the information processing system 1 includes a communication terminal 10, a positioning server 30, and an image generation server 50. Although only one of each is shown in FIG. 2, the information processing system 1 may include a plurality of each.
  • The communication terminal 10 is a terminal that is capable of communication and has a display; it is, for example, a smartphone, a tablet terminal, a PC, or a glasses-type wearable terminal.
  • The communication terminal 10 has a camera and is configured to be able to capture images of its surroundings. For example, when an application related to content depiction is started, the communication terminal 10 starts imaging with its camera.
  • The communication terminal 10 also has a sensor such as an acceleration sensor and is configured to be able to derive its moving speed (the moving speed of the user carrying the communication terminal 10). Further, the communication terminal 10 is configured to be able to acquire the user's gaze point (the point at which the user gazes through the display of the glasses-type wearable terminal).
  • The communication terminal 10 transmits the image captured by the camera (the captured image) to the positioning server 30.
  • The communication terminal 10 acquires a positioning result based on the captured image from the positioning server 30.
  • The communication terminal 10 transmits the captured image, the user's position (the positioning result), the user's gaze point, and the user's moving speed to the image generation server 50.
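
The data the communication terminal 10 sends to the image generation server 50 can be summarized in a single record. The following sketch uses hypothetical field names for the four items named above:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TerminalUpdate:
    """What the communication terminal 10 sends to the image generation server 50."""
    captured_image: bytes                 # camera frame of the surroundings
    position: Tuple[float, float, float]  # positioning result (e.g., lat, lon, height)
    gaze_point: Tuple[int, int]           # where the user is looking on the display
    speed: float                          # moving speed derived from the acceleration sensor
```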
  • The positioning server 30 has a storage unit 31 and a positioning unit 32.
  • The storage unit 31 stores map data 300. In the map data 300, feature amounts (for example, luminance direction vectors) of feature points included in previously acquired captured images are associated with global position information, which is absolute position information associated with those feature points.
  • The map data 300 is, for example, a 3D point cloud.
  • The map data 300 is generated in advance from a large number of images captured by, for example, a stereo camera (not shown) capable of imaging an object from a plurality of different directions simultaneously.
  • A feature point is a point that stands out in an image, for example, a point whose brightness (intensity) is higher (or lower) than that of other regions.
  • The global position information of a feature point is global position information set in association with that feature point, namely the real-world global position of the region indicated by the feature point in the image. The association of global position information with each feature point can be performed by a conventionally known method.
  • The storage unit 31 stores three-dimensional global position information as the global position information of the feature points of the map data 300.
  • The storage unit 31 stores, for example, the latitude, longitude, and height of each feature point as its three-dimensional global position information.
  • The positioning unit 32 estimates the global position information (three-dimensional position information) of the communication terminal 10 at the time of imaging, based on the captured image taken by the communication terminal 10 and the map data 300 stored in the storage unit 31. Specifically, the positioning unit 32 matches the feature points of the map data 300 against the feature points of the captured image and identifies the region of the map data 300 corresponding to the captured image. The positioning unit 32 then estimates the imaging position of the captured image (that is, the global position of the communication terminal 10 at the time of imaging) based on the global position information associated with the feature points of the map data 300 in the identified region. The positioning unit 32 transmits the positioning result to the communication terminal 10.
  • The positioning result includes, in addition to the global position information, information on the orientation estimated from the captured image (the direction in the three-dimensional coordinates of roll, pitch, and yaw). The positioning unit 32 may acquire global position information based on captured images taken by the communication terminal 10 at a fixed cycle, or based on a captured image taken by the communication terminal 10 at the timing of receiving an instruction from the user. The positioning result may also be acquired without relying on the positioning server 30; that is, the communication terminal 10 may acquire the user's position (positioning result) by a conventionally known positioning method, independently of the captured image.
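
The matching step can be sketched as a nearest-neighbor search over feature descriptors. The following simplified NumPy illustration assumes hypothetical function and parameter names; an actual visual positioning system would use robust matching and solve for the full camera pose (including roll, pitch, and yaw) rather than averaging matched positions as done here:

```python
import numpy as np

def estimate_global_position(query_desc, map_desc, map_global_pos, max_dist=0.7):
    """Match captured-image descriptors against map data and estimate a position.

    query_desc:     (N, D) feature descriptors from the captured image
    map_desc:       (M, D) feature descriptors stored in the map data
    map_global_pos: (M, 3) latitude, longitude, height per map feature point
    """
    matched = []
    for d in query_desc:
        dist = np.linalg.norm(map_desc - d, axis=1)  # distance to every map descriptor
        j = int(np.argmin(dist))
        if dist[j] < max_dist:                       # keep only sufficiently close matches
            matched.append(map_global_pos[j])
    if not matched:
        return None                                  # no region of the map identified
    # Placeholder: average the matched feature positions. A real positioning unit
    # would estimate the camera's own position and orientation from the 2D-3D
    # correspondences, e.g., via a PnP solver.
    return np.mean(matched, axis=0)
```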
  • The image generation server 50 includes an acquisition unit 51, an area determination unit 52, a first specifying unit 53, a second specifying unit 54, an image generation unit 55, an output unit 56, and a storage unit 57.
  • The storage unit 57 stores content data 500 (content information) in which objects in real space are associated with AR content. That is, the storage unit 57 stores information on the AR content to be depicted in the public area A1 in association with objects in the real space.
  • The storage unit 57 may store, for example, the substance of the AR content (what is displayed) and the size of the AR content in association with the shape and position information of objects (buildings, etc.) in the real space.
  • The acquisition unit 51 acquires peripheral space information including at least an image of the real space around the user.
  • Specifically, the acquisition unit 51 acquires, as the peripheral space information, the captured image transmitted from the communication terminal 10 (an image of the real space around the user) and the user's position (positioning result) corresponding to that captured image.
  • The acquisition unit 51 may also acquire, as peripheral space information, map metadata indicating the shapes of objects. Further, the acquisition unit 51 may additionally acquire information indicating the user's state.
  • For example, the acquisition unit 51 acquires, as information indicating the user's state, the user's gaze point and moving speed transmitted from the communication terminal 10.
  • The acquisition unit 51 outputs each piece of acquired information to the area determination unit 52.
  • Based on the peripheral space information acquired by the acquisition unit 51, the area determination unit 52 determines a public area in which the AR content of the content data 500 is depicted on the real-space image (captured image), a private area in which private content related to the user is depicted, and a non-depiction area, which is an area for securing the user's field of view. Specifically, the area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the types of surrounding buildings identified from the image (captured image) included in the peripheral space information.
  • For example, the area determination unit 52 may identify, from the captured image, the area of structures that the user needs to see clearly, such as traffic lights or traffic signs, and set that area as the non-depiction area. Further, the area determination unit 52 may set areas of building types on which AR content of the content data 500 is likely to be displayed as the public area, and areas of building types on which such AR content is unlikely to be displayed as the private area.
  • Alternatively, the area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the shapes of surrounding buildings identified from the peripheral space information.
  • For example, the area determination unit 52 may identify, from the captured image, a region showing the shape of a place where many people come and go, such as the entrance of a building (a place where the possibility of collision with the user is high and which the user needs to see clearly), and set that region as the non-depiction area.
  • Further, the area determination unit 52 may set areas of building shapes on which AR content of the content data 500 is likely to be displayed as the public area, and areas of building shapes on which such AR content is unlikely to be displayed as the private area.
  • The area determination unit 52 may also determine the public area, the private area, and the non-depiction area in consideration of the information indicating the user's state acquired by the acquisition unit 51. Specifically, the area determination unit 52 may determine each area based on the user's gaze point. For example, the area determination unit 52 may judge that the user's gaze point and the area around it are areas the user wants to see, and set them as the non-depiction area. Alternatively, the area determination unit 52 may judge that the user's gaze point and the area around it are areas in which AR content related to each building or the like can be shown to the user effectively, and set them as the public area.
  • The area determination unit 52 may also determine each area based on the user's movement mode, which is identified from the user's moving speed. For example, when the user's movement mode is identified from the moving speed as being by car (the user is driving), the area determination unit 52 may judge that the user's field of view must be sufficiently secured and enlarge the non-depiction area. When the movement mode is identified as walking (the user is walking), the area determination unit 52 may make the public area and the private area relatively large.
  • When the user's movement mode is 'stopped', indicating that the user is not moving, the area determination unit 52 may judge from the moving speed that a non-depiction area is unnecessary from the standpoint of safety and the like, and determine the areas without a non-depiction area.
  • FIGS. 3 to 5 are diagrams illustrating examples of the area determination processing performed by the area determination unit 52 of the image generation server 50.
  • In the example shown in FIG. 3, the area determination unit 52 identifies from the user's moving speed that the user is driving. In this case, the area determination unit 52 sets the non-depiction area A3 relatively large (for example, larger than the public area A1 and the private area A2). Further, the area determination unit 52 determines the sizes and positions of the public area A1, the private area A2, and the non-depiction area A3 based on the types and shapes of surrounding buildings identified from the peripheral space information. Specifically, as shown in FIG. 3, the area determination unit 52 determines as the non-depiction area the areas of signs and traffic lights necessary for driving, areas near the road, and areas of the sidewalk where a pedestrian may run out or a car may emerge from a building (such as a parking lot). Further, the area determination unit 52 sets areas of building shapes on which AR content of the content data 500 is likely to be displayed as the public area A1, and the area on which such AR content is unlikely to be displayed (the lower area) as the private area A2.
  • In the example shown in FIG. 4, the area determination unit 52 identifies from the user's moving speed that the user is walking.
  • In this case, the area determination unit 52 may set the non-depiction area A3 smaller than in the state shown in FIG. 3.
  • Based on the types and shapes of surrounding buildings identified from the peripheral space information, the area determination unit 52 determines entrance portions of buildings, areas near the sidewalk, and areas where a vehicle may enter the sidewalk from the roadway as the non-depiction area A3.
  • Further, the area determination unit 52 sets areas of building shapes on which AR content of the content data 500 is likely to be displayed as the public area A1, and the area on which such AR content is unlikely to be displayed (the lower area) as the private area A2.
  • In the example shown in FIG. 5, the area determination unit 52 identifies from the user's moving speed that the user is stopped. In this case, the area determination unit 52 may decide not to set a non-depiction area A3. Further, since the user is stopped, the area determination unit 52 may judge that the user is watching the communication terminal 10 (operating private content), for example because the user is waiting for someone, and set the private area A2 relatively large (larger than the public area A1). In this case as well, the area determination unit 52 may, for example, set the area of objects with many associated AR content items (public content) as the public area A1.
  • The public area and the private area are set, for example, as follows according to the state of the user (see the sketch after this paragraph): if the user appears to be waiting for someone, the public area is normal and the private area is large; if the user appears to be waiting at a traffic light, the public area is large and the private area is normal; if the user appears to be walking, the public area is large and the private area is small (or omitted); if the user appears to be on a train or bus, the public area is small (or omitted) and the private area is large; and if the user appears to be driving, both the public area and the private area are set small (or omitted).
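
This mapping from user state to area sizes can be expressed as a lookup table. The sketch below simply restates the rules in the preceding paragraph; the state names and size labels are illustrative, not taken from the publication:

```python
from enum import Enum, auto

class UserState(Enum):
    WAITING_FOR_SOMEONE = auto()
    WAITING_AT_SIGNAL = auto()
    WALKING = auto()
    ON_TRAIN_OR_BUS = auto()
    DRIVING = auto()

# (public area size, private area size); "none" means the area may be omitted.
AREA_POLICY = {
    UserState.WAITING_FOR_SOMEONE: ("normal", "large"),
    UserState.WAITING_AT_SIGNAL:   ("large", "normal"),
    UserState.WALKING:             ("large", "small or none"),
    UserState.ON_TRAIN_OR_BUS:     ("small or none", "large"),
    UserState.DRIVING:             ("small or none", "small or none"),
}
```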
  • The first specifying unit 53 identifies the AR content associated with objects in the real space included in the public area by referring to the content data 500.
  • The first specifying unit 53 narrows down the AR content in the content data 500 according to, for example, the user's position and the shapes of objects.
  • The second specifying unit 54 identifies the private content depicted in the private area based on an instruction from the user.
  • The second specifying unit 54 identifies as private content, for example, an application designated by the user on the display of the communication terminal 10.
  • The image generation unit 55 generates a depiction image in which the AR content identified by the first specifying unit 53 is depicted in the public area and the private content identified by the second specifying unit 54 is depicted in the private area.
  • The image generation unit 55 may change the depiction mode of the AR content in consideration of, for example, the distance to the AR content, the content size, and the user's moving speed.
  • For example, the image generation unit 55 may determine the depiction mode of the AR content by the following formula, where w1, w2, and w3 are weights that change according to, for example, the screen size of the communication terminal 10 and the user's visual acuity:
  • depiction mode = (w1 × distance) × (w2 × content size) × (w3 × speed)
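
In code form the formula is a product of weighted factors. A minimal sketch, with arbitrary default weights since the publication gives no concrete values:

```python
def depiction_mode_score(distance, content_size, speed, w1=1.0, w2=1.0, w3=1.0):
    """Score for choosing the depiction mode of an AR content item.

    Follows the formula in the text:
        (w1 * distance) * (w2 * content_size) * (w3 * speed)
    w1-w3 would vary with the terminal's screen size, the user's visual acuity,
    etc.; the defaults here are placeholders.
    """
    return (w1 * distance) * (w2 * content_size) * (w3 * speed)
```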
  • The output unit 56 outputs the depiction image generated by the image generation unit 55.
  • For example, the output unit 56 displays the depiction image on the display of the communication terminal 10.
  • FIG. 6 is a flowchart showing a process executed by the image generation server 50.
  • In the processing shown in FIG. 6, the image generation server 50 first acquires the peripheral space information and the information indicating the user's state (step S1). Subsequently, the image generation server 50 determines the sizes and positions of the public area, the private area, and the non-depiction area based on the acquired information (step S2).
  • Subsequently, the image generation server 50 identifies the AR content to be displayed in the public area (step S3). Further, the image generation server 50 identifies the private content to be displayed in the private area (step S4).
  • Finally, the image generation server 50 generates a depiction image in which the AR content is depicted in the public area and the private content is depicted in the private area (step S5), and outputs the depiction image (step S6).
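
Steps S1 to S6 form a linear pipeline. The sketch below makes the flow of FIG. 6 concrete; the method names on the server object are purely illustrative:

```python
def run_depiction_pipeline(server):
    """One pass of the processing in FIG. 6 (method names are hypothetical)."""
    info, user_state = server.acquire_inputs()                       # S1
    layout = server.determine_areas(info, user_state)                # S2
    ar_content = server.select_ar_content(layout.public)             # S3
    private_content = server.select_private_content(layout.private)  # S4
    image = server.render(layout, ar_content, private_content)       # S5
    server.output(image)                                             # S6
    return image
```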
  • As described above, the information processing system 1 includes the storage unit 57 that stores the content data 500 in which objects in the real space are associated with AR content, the acquisition unit 51 that acquires peripheral space information including at least an image of the real space around the user, and the area determination unit 52 that determines the display area of content including the AR content based on the peripheral space information.
  • In the information processing system 1, the display area of the content is determined based on peripheral space information including at least an image of the real space around the user.
  • According to such an information processing system 1, the display area changes as the peripheral space information (that is, the spatial information around the user) changes.
  • As a result, each area can be adjusted according to the surrounding spatial information, improving the ease of content processing by the user.
  • According to the information processing system 1, therefore, the ease of processing content can be improved. Further, improving the ease of processing reduces unnecessary operations by the user, which in turn reduces the processing load on the system.
  • The area determination unit 52 determines a public area in which the AR content of the content data 500 is depicted on the real-space image and a private area in which private content related to the user is depicted. The information processing system 1 further includes the first specifying unit 53, which identifies the AR content associated with objects in the real space included in the public area by referring to the content data 500; the second specifying unit 54, which identifies the private content depicted in the private area based on an instruction from the user; the image generation unit 55, which generates a depiction image in which the AR content identified by the first specifying unit 53 is depicted in the public area and the private content identified by the second specifying unit 54 is depicted in the private area; and the output unit 56, which outputs the depiction image.
  • According to such a configuration, a public area in which AR content is depicted and a private area in which private content is depicted are determined, and a depiction image in which the content corresponding to each area is depicted is generated and output.
  • With this configuration, the public area and the private area change as the peripheral space information (that is, the spatial information around the user) changes.
  • This improves the ease of processing content.
  • Based on the peripheral space information, the area determination unit 52 may determine an area that secures the user's field of view as a non-depiction area in which no content is depicted. With such a configuration, no content is depicted in areas where the user's field of view should be secured, so the user's visibility can be prevented from deteriorating due to the depiction of content.
  • The area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the types of surrounding buildings identified from the peripheral space information. With such a configuration, each area can be determined more appropriately according to building type. For example, the area of a structure that the user needs to see clearly, such as a traffic light or traffic sign, can be made a non-depiction area, so both ease of content processing and the securing of the user's field of view can be realized more appropriately.
  • The area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the shapes of surrounding buildings identified from the peripheral space information. With such a configuration, each area can be determined more appropriately according to building shape. For example, a place where many people come and go, such as the entrance of a building (a place where the possibility of collision with the user is high and which the user needs to see clearly), can be made a non-depiction area, so both ease of content processing and the securing of the user's field of view can be realized more appropriately.
  • The acquisition unit 51 may further acquire information indicating the user's state, and the area determination unit 52 may determine the public area, the private area, and the non-depiction area in consideration of the user's state. With such a configuration, not only the peripheral space information but also the user's state is taken into account, so areas that are more appropriate from the standpoint of the user's ease of content processing can be determined.
  • The acquisition unit 51 may acquire information indicating the user's state that includes at least the user's gaze point, and the area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the gaze point. With such a configuration, each area can be determined so as to further improve the user's ease of content processing, taking into account where the user is looking.
  • The acquisition unit 51 may acquire information indicating the user's state that includes the user's moving speed, and the area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the user's movement mode identified from the moving speed. With such a configuration, each area can be determined appropriately according to the movement mode; for example, the non-depiction area is enlarged while driving and the public area is enlarged while walking.
  • The area determination unit 52 may decide not to set a non-depiction area when the user's movement mode is 'stopped', indicating that the user is not moving. Since a non-depiction area for safety is unnecessary while the user is stopped, each area can be determined more appropriately by such control.
  • The communication terminal 10, the positioning server 30, and the image generation server 50 may each be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • In the following description, the word "device" can be read as a circuit, a device, a unit, or the like. The hardware configurations of the communication terminal 10, the positioning server 30, and the image generation server 50 may each include one or more of the devices shown in the figure, or may be configured without some of the devices.
  • Each function of the communication terminal 10, the positioning server 30, and the image generation server 50 is realized by loading predetermined software (programs) onto hardware such as the processor 1001 and the memory 1002, having the processor 1001 perform calculations, and controlling communication by the communication device 1004 and the reading and/or writing of data in the memory 1002 and the storage 1003.
  • The processor 1001 controls the entire computer by, for example, running an operating system.
  • The processor 1001 may be configured as a central processing unit (CPU: Central Processing Unit) including interfaces with peripheral devices, a control device, an arithmetic unit, registers, and the like.
  • For example, the control function of the area determination unit 52 of the image generation server 50 may be realized by the processor 1001.
  • The processor 1001 reads programs (program code), software modules, and data from the storage 1003 and/or the communication device 1004 into the memory 1002 and executes various processes according to them. As the programs, a program that causes a computer to execute at least some of the operations described in the above embodiment is used.
  • For example, the control function of the area determination unit 52 of the image generation server 50 may be realized by a control program stored in the memory 1002 and run by the processor 1001, and the other functional blocks may be realized in the same way. Although the various processes described above have been described as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented with one or more chips. The programs may be transmitted from a network via a telecommunication line.
  • The memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), and RAM (Random Access Memory).
  • The memory 1002 may also be referred to as a register, a cache, a main memory (main storage device), or the like.
  • The memory 1002 can store executable programs (program code), software modules, and the like for implementing the method according to the embodiment of the present invention.
  • The storage 1003 is a computer-readable recording medium, and may be composed of at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, a magnetic strip, and the like.
  • The storage 1003 may also be referred to as an auxiliary storage device.
  • The storage medium described above may be, for example, a database, a server, or another suitable medium including the memory 1002 and/or the storage 1003.
  • The communication device 1004 is hardware (a transmission/reception device) for communication between computers via a wired and/or wireless network, and is also referred to as, for example, a network device, a network controller, a network card, or a communication module.
  • The input device 1005 is an input device that accepts input from the outside (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor).
  • The output device 1006 is an output device that performs output to the outside (for example, a display, a speaker, or an LED lamp).
  • The input device 1005 and the output device 1006 may be integrated (for example, as a touch panel).
  • The devices such as the processor 1001 and the memory 1002 are connected by the bus 1007 for communicating information.
  • The bus 1007 may be composed of a single bus, or of different buses between the devices.
  • The communication terminal 10, the positioning server 30, and the image generation server 50 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP: Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array), and some or all of the functional blocks may be realized by that hardware.
  • For example, the processor 1001 may be implemented with at least one of these pieces of hardware.
  • Each aspect/embodiment described in this specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-WideBand), Bluetooth (registered trademark), or other appropriate systems, and/or to next-generation systems extended based on them.
  • Input and output information and the like may be stored in a specific location (for example, a memory) or managed in a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
  • A determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by a comparison of numerical values (for example, a comparison with a predetermined value).
  • Notification of predetermined information (for example, notification that "it is X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).
  • Software, instructions, and the like may be transmitted and received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using wired technology such as coaxial cable, optical fiber cable, twisted pair, and digital subscriber line (DSL), and/or wireless technology such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.
  • The information, signals, and the like described herein may be represented using any of a variety of different techniques.
  • For example, the data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
  • The information, parameters, and the like described in this specification may be represented by absolute values, by values relative to a predetermined value, or by other corresponding information.
  • The communication terminal may also be referred to by those skilled in the art as a mobile communication terminal, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a handset, a user agent, a mobile client, a client, or some other suitable term.
  • Any reference to elements using designations such as "first" and "second" as used herein does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient way of distinguishing between two or more elements. Thus, references to first and second elements do not mean that only two elements may be employed, or that the first element must precede the second element in some way.
  • 1... information processing system, 51... acquisition unit, 52... area determination unit, 53... first specifying unit, 54... second specifying unit, 55... image generation unit, 56... output unit, 57... storage unit, 500... content data.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This information processing system comprises: a storage unit which stores content data; an acquisition unit which acquires surrounding space information; an area determination unit which determines, on the basis of the surrounding space information, a public area in which AR content is depicted and a private area in which private content is depicted; a first specification unit which specifies the AR content associated with an object in a real space included in the public area by referring to the content data; a second specification unit which specifies the private content depicted in the private area; an image generation unit which generates a depiction image in which the AR content is depicted in the public area and the private content specified by the second specification unit is depicted in the private area; and an output unit which outputs the depiction image.

Description

Information processing system

One aspect of the present invention relates to an information processing system.

Patent Document 1 describes that private content owned by a communication terminal and public content not owned by the communication terminal are simultaneously displayed on the same screen of the user's communication terminal.

Japanese Unexamined Patent Publication No. 2007-294068
In a system that simultaneously displays private content and public content on the same screen as described above, it may be easier for the user to process the desired content if one of the two types of content is displayed predominantly according to the user's surroundings. Usually, however, the display area of each type of content is fixed, so situations can arise in which it is not easy for the user to process the content.

One aspect of the present invention has been made in view of the above circumstances, and an object thereof is to improve the ease of processing content.

The information processing system according to one aspect of the present invention includes a storage unit that stores content information in which objects in real space are associated with AR content, an acquisition unit that acquires peripheral space information including at least an image of the real space around the user, and an area determination unit that determines the display area of content, including the AR content, based on the peripheral space information.

In the information processing system according to one aspect of the present invention, the display area of the content is determined based on peripheral space information including at least an image of the real space around the user. According to such an information processing system, the display area changes as the peripheral space information (that is, the spatial information around the user) changes. As a result, each area can be adjusted according to the surrounding spatial information, improving the ease of content processing by the user. As described above, according to the information processing system of one aspect of the present invention, the ease of processing content can be improved.

According to one aspect of the present invention, the ease of processing content can be improved.
FIG. 1 is a diagram illustrating an outline of the information processing system according to the present embodiment. FIG. 2 is a block diagram showing the functional configuration of the information processing system according to the present embodiment. FIGS. 3 to 5 are diagrams each illustrating an example of area determination processing by the image generation server. FIG. 6 is a flowchart showing processing executed by the image generation server. FIG. 7 is a diagram showing the hardware configurations of the communication terminal, the positioning server, and the image generation server included in the information processing system according to the present embodiment.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the description of the drawings, the same reference numerals are used for the same or equivalent elements, and duplicate descriptions are omitted.
FIG. 1 is a diagram illustrating an outline of the information processing system 1 according to the present embodiment. In the information processing system 1, the image generated by the image generation server 50 is displayed on the communication terminal 10. That is, the user views the image generated by the image generation server 50 on the screen of the communication terminal 10. FIGS. 1(a) to 1(c) show examples of images generated by the image generation server 50 and displayed on the screen of the communication terminal 10. The image generation server 50 generates an image in which AR content and the like are superimposed on the image captured by the communication terminal 10.

Based on the peripheral space information, the information processing system 1 determines a public area in which AR content is depicted (superimposed) on the real-space image and a private area in which private content related to the user carrying the communication terminal 10 is depicted (superimposed), and generates an image (a depicted image) in which content is depicted in each area. The public area is an area for depicting AR content associated with objects in real space. AR content in the public area is, for example, content related to signs, advertisements, works of art, and various entertainment works. The private area is an area in which the user's own content (private content) is depicted. Private content is, for example, content such as an Internet search screen, games, e-mail, SNS, or maps.

More specifically, in the information processing system 1, the public area, the private area, and the non-depiction area are determined based on the peripheral space information. That is, in the information processing system 1, the sizes of the public area, the private area, and the non-depiction area change according to the peripheral space information. The non-depiction area is an area that secures the user's field of view by depicting no content in it.

As shown in FIG. 1(a), at least a public area A1 and a private area A2 are set in the image displayed on the communication terminal 10 (the screen of the communication terminal 10). In the example shown in FIG. 1(b), a plurality of AR content items F associated with objects in the real space are depicted in the public area A1. Specifically, in FIG. 1(b), the public area A1 depicts AR content F as a sign associated with the building of a station, AR content F as a sign associated with the building of a supermarket, AR content F as a sign associated with the building of a cafe, and AR content F as an advertisement associated with a building selling clothing. Further, in the example shown in FIG. 1(b), content M related to a map and content S related to an SNS are depicted in the private area A2. In this case, taking the peripheral space information into consideration, the area containing many buildings for which AR content is depicted (the upper part in FIG. 1(b)) is set as the public area A1, and the area containing few such buildings (the lower part) is set as the private area A2. In the example shown in FIG. 1(b), the user is identified as walking by considering, for example, information on the user's moving speed; the public area A1 and the private area A2 are therefore set relatively large, and no non-depiction area is set. This is because a non-depiction area for securing the user's field of view is unnecessary while the user is walking.

In the example shown in FIG. 1(c), the public area A1 depicts AR content F as a sign associated with the building of a restaurant and AR content F as a sign associated with the building of a cafe. Further, in the example shown in FIG. 1(c), content M related to the map is depicted in the private area A2. In this example, the user is identified as riding in a car by considering, for example, information on the user's moving speed; the public area A1 and the private area A2 are therefore set relatively small, and a non-depiction area A3 is set. In this case, taking the peripheral space information into consideration, the area containing many buildings for which AR content is depicted (the upper part) is set as the public area A1, the area containing few such buildings (the lower part) is set as the private area A2, and the area for which visibility is most needed while driving (the central part in FIG. 1(c)) is set as the non-depiction area A3.

According to such an information processing system 1, the public area A1, the private area A2, and the non-depiction area A3 change as the peripheral space information and the like (that is, the spatial information around the user) change. As a result, the areas can be adjusted according to the surrounding spatial information: for example, the public area A1 can be enlarged when AR content corresponding to surrounding buildings should be displayed predominantly, such as at a tourist spot; the private area A2 can be enlarged when the user's private content should be displayed predominantly rather than AR content related to surrounding buildings; and the non-depiction area A3 can be enlarged when securing the user's field of view should take priority. This preserves the ease of content processing by the user while preventing the user's visibility from deteriorating.
 次に、図2を参照して、本実施形態に係る情報処理システム1の機能構成を説明する。図2は、本実施形態に係る情報処理システム1の機能構成を示すブロック図である。 Next, with reference to FIG. 2, the functional configuration of the information processing system 1 according to the present embodiment will be described. FIG. 2 is a block diagram showing a functional configuration of the information processing system 1 according to the present embodiment.
 図2に示されるように、情報処理システム1は、通信端末10と、位置測位サーバ30と、画像生成サーバ50と、を含んで構成されている。なお、図2においては通信端末10、位置測位サーバ30、及び画像生成サーバ50がそれぞれ1つのみ示されているが、情報処理システム1は、これらをそれぞれ複数含んで構成されていてもよい。 As shown in FIG. 2, the information processing system 1 includes a communication terminal 10, a positioning server 30, and an image generation server 50. Although only one communication terminal 10, a positioning server 30, and an image generation server 50 are shown in FIG. 2, the information processing system 1 may be configured to include a plurality of each.
 通信端末10は、ディスプレイを有した通信可能な端末であり、例えば、スマートフォン、タブレット型端末、PC、眼鏡型ウェアラブル端末等である。通信端末10は、カメラを有しており、周辺を撮像可能に構成されている。通信端末10は、例えばコンテンツ描写に係るアプリケーションが開始されると、実装するカメラによる撮像を開始する。また、通信端末10は、加速度センサ等のセンサを有しており、移動速度(通信端末10を携帯するユーザの移動速度)を導出可能に構成されている。また、通信端末10は、ユーザの注視点(眼鏡型ウェアラブル端末のディスプレイを介してユーザが注視する点)を取得可能に構成されている。通信端末10は、カメラによって撮像した画像(撮像画像)を位置測位サーバ30に送信する。通信端末10は、位置測位サーバ30から撮像画像に基づく位置測位結果を取得する。通信端末10は、撮像画像、ユーザの位置(位置測位結果)、ユーザの注視点、及びユーザの移動速度を画像生成サーバ50に送信する。 The communication terminal 10 is a communicable terminal having a display, and is, for example, a smartphone, a tablet-type terminal, a PC, a glasses-type wearable terminal, or the like. The communication terminal 10 has a camera and is configured to be capable of capturing an image of the surrounding area. For example, when an application related to content depiction is started, the communication terminal 10 starts imaging by a camera to be mounted. Further, the communication terminal 10 has a sensor such as an acceleration sensor, and is configured to be able to derive a moving speed (moving speed of a user carrying the communication terminal 10). Further, the communication terminal 10 is configured to be able to acquire the user's gaze point (the point at which the user gazes through the display of the eyeglass-type wearable terminal). The communication terminal 10 transmits an image (captured image) captured by the camera to the positioning server 30. The communication terminal 10 acquires a positioning result based on the captured image from the positioning server 30. The communication terminal 10 transmits an captured image, a user's position (positioning result), a user's gaze point, and a user's movement speed to the image generation server 50.
The positioning server 30 has a storage unit 31 and a positioning unit 32. The storage unit 31 stores map data 300. In the map data 300, feature quantities (for example, luminance direction vectors) of feature points included in previously captured images are associated with global position information, which is absolute position information associated with the feature points. The map data 300 is, for example, a 3D point cloud. The map data 300 is captured in advance by a stereo camera (not shown) or the like capable of imaging an object from a plurality of different directions simultaneously, and is generated based on a large number of captured images. A feature point is a point that is prominently detected in an image, for example, a point whose luminance (intensity) is higher (or lower) than that of the surrounding region. The global position information of a feature point is global position information set in association with the feature point, and is the real-world global position information of the region indicated by the feature point in the image. The association of global position information with each feature point can be performed by a conventionally known method. The storage unit 31 stores three-dimensional global position information as the global position information of the feature points of the map data 300, for example, the latitude, longitude, and height of each feature point.
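As a minimal sketch only, one entry of such a point-cloud map could pair a feature quantity with its three-dimensional global position; the record layout below is an assumption, not the patented structure:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MapPoint:
        """One feature point of map data 300 (illustrative layout).

        descriptor: the feature quantity, e.g. a luminance direction vector.
        lat/lon/height: the 3D global position information of the point.
        """
        descriptor: np.ndarray
        lat: float
        lon: float
        height: float

    # map_data_300 would then be a collection of such records, built
    # offline from a large number of stereo captures.
    map_data_300: list[MapPoint] = []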
The positioning unit 32 estimates the global position information (three-dimensional position information) of the communication terminal 10 at the time of imaging, based on the image captured by the communication terminal 10 and the map data 300 stored in the storage unit 31. Specifically, the positioning unit 32 matches the feature points of the map data 300 against the feature points of the image captured by the communication terminal 10 and identifies the region of the map data 300 that corresponds to the captured image. The positioning unit 32 then estimates the imaging position of the captured image (that is, the global position information of the communication terminal 10 at the time of imaging) based on the global position information associated with the feature points of the map data 300 in the identified region. The positioning unit 32 transmits the positioning result to the communication terminal 10.
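The disclosure leaves the matching itself to conventionally known methods; purely as a sketch of the data flow, and reusing the hypothetical MapPoint record above, a naive nearest-neighbor matcher might estimate the position as the centroid of the matched map points (a real system would instead solve a camera-pose problem, for example PnP with RANSAC):

    from typing import Optional

    import numpy as np

    def estimate_position(query_descriptors: list[np.ndarray],
                          map_points: list[MapPoint],
                          max_dist: float = 0.5) -> Optional[tuple[float, float, float]]:
        """Match each query descriptor to its nearest map descriptor and
        average the global positions of the matches. Illustrative only."""
        matched = []
        for q in query_descriptors:
            dists = [float(np.linalg.norm(q - m.descriptor)) for m in map_points]
            best = int(np.argmin(dists))
            if dists[best] < max_dist:
                matched.append(map_points[best])
        if not matched:
            return None  # no region of map data 300 matches the image
        n = len(matched)
        return (sum(m.lat for m in matched) / n,
                sum(m.lon for m in matched) / n,
                sum(m.height for m in matched) / n)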
In addition to the global position information, the positioning result includes information on the direction estimated from the captured image (the direction in the three-dimensional coordinates of roll, pitch, and yaw). The positioning unit 32 may acquire the global position information based on images captured by the communication terminal 10 at a fixed cycle, or based on an image captured by the communication terminal 10 at the timing of an instruction from the user. The positioning result may also be acquired without relying on the positioning server 30; that is, the communication terminal 10 may acquire the user's position (positioning result) by a conventionally known positioning method, independently of the captured image.
The image generation server 50 includes an acquisition unit 51, an area determination unit 52, a first specifying unit 53, a second specifying unit 54, an image generation unit 55, an output unit 56, and a storage unit 57.
The storage unit 57 stores content data 500 (content information) in which objects in the real space and AR content are associated with each other. That is, the storage unit 57 stores, in association with objects in the real space, the information of the AR content to be depicted in the public area A1. The storage unit 57 may store, for example, the substance of the AR content (display content), the size of the AR content, and the like, in association with the shape and position information of an object (a building or the like) in the real space.
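Illustratively (and with all field names assumed), a record of content data 500 might look like the following, associating a building's position and shape with the AR content and size to overlay:

    from dataclasses import dataclass

    @dataclass
    class ContentRecord:
        """One entry of content data 500 (hypothetical layout)."""
        object_id: str                                # e.g. a building identifier
        object_position: tuple[float, float, float]   # lat, lon, height
        object_shape: list[tuple[float, float]]       # footprint polygon (assumed encoding)
        ar_payload: str                               # display content (text, model URI, ...)
        ar_size: tuple[float, float]                  # size of the AR content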
The acquisition unit 51 acquires peripheral space information including at least an image of the real space around the user. The acquisition unit 51 acquires, as the peripheral space information, the captured image transmitted from the communication terminal 10 (an image of the real space around the user) and the user's position (positioning result) corresponding to the captured image. The acquisition unit 51 may also acquire, as peripheral space information, map metadata indicating the shapes of objects. The acquisition unit 51 may further acquire information indicating the state of the user; it acquires the user's viewpoint (gaze point) and movement speed transmitted from the communication terminal 10 as such information. The acquisition unit 51 outputs the acquired information to the area determination unit 52.
Based on the peripheral space information acquired by the acquisition unit 51, the area determination unit 52 determines, within the image of the real space (captured image), a public area in which the AR content of the content data 500 is depicted, a private area in which private content relating to the user is depicted, and a non-depiction area, which is an area that secures the user's field of view. Specifically, the area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the types of surrounding buildings identified from the image (captured image) included in the peripheral space information. For example, the area determination unit 52 may identify, in the captured image, the regions of structures that the user needs to see clearly, such as traffic lights and traffic signs, and set those regions as non-depiction areas. The area determination unit 52 may also set regions of building types on which AR content of the content data 500 is likely to be displayed as the public area, and regions of building types on which AR content of the content data 500 is unlikely to be displayed as the private area.
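A minimal sketch of this type-based rule, assuming an upstream detector has already labeled image regions with an object type (the type names and the detector are assumptions):

    from enum import Enum, auto

    class Area(Enum):
        PUBLIC = auto()         # public area A1
        PRIVATE = auto()        # private area A2
        NON_DEPICTION = auto()  # non-depiction area A3

    # Structures the user must see clearly are never drawn over.
    SAFETY_CRITICAL = {"traffic_light", "traffic_sign"}
    # Building types on which AR content of content data 500 is likely shown.
    AR_RICH = {"shop", "landmark", "restaurant"}

    def classify_region(object_type: str) -> Area:
        """Type-based rule: safety-critical structures become non-depiction
        areas, AR-rich building types become the public area, and the rest
        defaults to the private area."""
        if object_type in SAFETY_CRITICAL:
            return Area.NON_DEPICTION
        if object_type in AR_RICH:
            return Area.PUBLIC
        return Area.PRIVATE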
The area determination unit 52 may also determine the public area, the private area, and the non-depiction area based on the shapes of surrounding buildings identified from the peripheral space information. For example, the area determination unit 52 may identify, in the captured image, regions whose shape indicates a place with heavy foot traffic, such as the entrance of a building (an area where a collision with the user is likely and which the user therefore needs to see clearly), and set such regions as non-depiction areas. Likewise, it may set regions of building shapes on which AR content of the content data 500 is likely to be displayed as the public area, and regions of building shapes on which it is unlikely to be displayed as the private area.
The area determination unit 52 may further take into account the information indicating the user's state acquired by the acquisition unit 51 when determining the public area, the private area, and the non-depiction area. Specifically, the area determination unit 52 may determine each area based on the user's gaze point. For example, it may judge that the gaze point and the region around it are an area the user wants to see, and set them as a non-depiction area. Alternatively, it may judge that the gaze point and the region around it are an area in which AR content relating to each building can be shown to the user effectively, and set them as the public area.
The area determination unit 52 may also determine each area based on the user's movement mode identified from the user's movement speed. For example, when the movement speed indicates that the user's movement mode is by car (the user is driving), the area determination unit 52 determines that the user's field of view must be sufficiently secured and may enlarge the non-depiction area. When the movement speed indicates that the movement mode is on foot (the user is walking), it may make the public area and the private area relatively large. When the movement speed indicates that the movement mode is stationary (not moving), it may determine that no non-depiction area needs to be provided for safety reasons, and determine that there is no non-depiction area.
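As a sketch, the mapping from movement speed to movement mode and to the relative weight of the non-depiction area A3 might look as follows; the thresholds and weights are illustrative assumptions, not values from the disclosure:

    def movement_mode(speed_mps: float) -> str:
        """Classify the user's movement mode from movement speed.
        The thresholds (0.2 m/s, 3.0 m/s) are hypothetical."""
        if speed_mps < 0.2:
            return "stopped"
        if speed_mps < 3.0:
            return "walking"
        return "driving"

    def non_depiction_weight(mode: str) -> float:
        """Relative share of the view given to non-depiction area A3,
        following the tendency described above (none when stopped,
        largest when driving)."""
        return {"stopped": 0.0, "walking": 0.2, "driving": 0.6}[mode]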
FIGS. 3 to 5 are diagrams illustrating an example of the area determination processing performed by the area determination unit 52 of the image generation server 50.
In the example shown in FIG. 3, the area determination unit 52 identifies, from the user's movement speed, that the user is driving. In this case, the area determination unit 52 sets the non-depiction area A3 relatively large (for example, larger than the public area A1 and the private area A2). The area determination unit 52 also determines the sizes and positions of the public area A1, the private area A2, and the non-depiction area A3 based on the types and shapes of the surrounding buildings identified from the peripheral space information. Specifically, as shown in FIG. 3, the area determination unit 52 sets as non-depiction areas the regions of signs and signals required for driving, the regions near the road, and the regions of the sidewalk from which pedestrians may run out or from which cars may emerge from buildings (parking lots and the like). It further sets regions of building shapes on which AR content of the content data 500 is likely to be displayed as the public area A1, and a region in which AR content of the content data 500 is unlikely to be displayed (the lower region) as the private area A2.
In the example shown in FIG. 4, the area determination unit 52 identifies, from the user's movement speed, that the user is walking. In this case, the area determination unit 52 may set the non-depiction area A3 smaller than in the state of FIG. 3. Based on the types and shapes of the surrounding buildings identified from the peripheral space information, it sets the entrance portion of the building, the regions near the sidewalk, and regions where a car may enter the sidewalk from the roadway as the non-depiction area A3. It further sets regions of building shapes on which AR content of the content data 500 is likely to be displayed as the public area A1, and a region in which AR content of the content data 500 is unlikely to be displayed (the lower region) as the private area A2.
In the example shown in FIG. 5, the area determination unit 52 identifies, from the user's movement speed, that the user is stopped. In this case, the area determination unit 52 may determine that there is no non-depiction area A3. Because the user is stopped, the area determination unit 52 may also judge that the user is, for example, waiting for someone and gazing at the communication terminal 10 (operating private content), and make the private area A2 relatively large (larger than the public area A1). Even in this case, the area determination unit 52 may, for example, set the region of an object with which much AR content (public content) is associated as the public area A1.
The public area and the private area are set, for example, according to the user's state, as follows: when the user appears to be waiting for someone, the public area is normal and the private area is large; when the user appears to be waiting at a traffic light, the public area is large and the private area is normal; when the user appears to be walking, the public area is large and the private area is small (or absent); when the user appears to be riding a train or bus, the public area is small (or absent) and the private area is large; and when the user appears to be driving, both the public area and the private area are small (or absent).
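This state-dependent sizing reads naturally as a lookup table; a sketch follows, in which the state labels and the coarse size grades stand in for "large / normal / small (or absent)":

    # (public area, private area) sizing per inferred user state,
    # transcribing the tendencies listed above.
    AREA_POLICY: dict[str, tuple[str, str]] = {
        "waiting_for_someone": ("normal", "large"),
        "waiting_at_signal":   ("large", "normal"),
        "walking":             ("large", "small_or_none"),
        "riding_train_or_bus": ("small_or_none", "large"),
        "driving":             ("small_or_none", "small_or_none"),
    }

    def layout_for(state: str) -> tuple[str, str]:
        """Return the (public, private) sizing policy for a user state,
        defaulting to normal sizes for states not listed."""
        return AREA_POLICY.get(state, ("normal", "normal"))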
Returning to FIG. 2, the first specifying unit 53 refers to the content data 500 to specify the AR content associated with the real-space objects included in the public area. The first specifying unit 53 narrows down the AR content in the content data 500 according to, for example, the user's position and the shapes of the objects.
The second specifying unit 54 specifies the private content to be depicted in the private area based on an instruction from the user. The second specifying unit 54 specifies, for example, an application designated by the user on the display of the communication terminal 10 as the private content.
The image generation unit 55 generates a depiction image in which the AR content specified by the first specifying unit 53 is depicted in the public area and the private content specified by the second specifying unit 54 is depicted in the private area. The image generation unit 55 may vary the depiction mode of the AR content in consideration of the distance to the AR content, the content size, the user's movement speed, and the like. In this case, the image generation unit 55 may determine the depiction mode of the AR content by, for example, the following formula, where w1, w2, and w3 are weights that vary according to the screen size of the communication terminal 10, the user's visual acuity, and the like:
depiction mode = (w1 × distance) × (w2 × content size) × (w3 × speed)
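Transcribed directly into code (a sketch; the default weights and the use of the resulting score, for instance as a level-of-detail selector, are assumptions):

    def depiction_score(distance_m: float, content_size: float, speed_mps: float,
                        w1: float = 1.0, w2: float = 1.0, w3: float = 1.0) -> float:
        """Depiction mode score per the formula above:
        (w1 x distance) x (w2 x content size) x (w3 x speed).
        w1..w3 would be tuned to the terminal's screen size and the
        user's visual acuity; the defaults are placeholders."""
        return (w1 * distance_m) * (w2 * content_size) * (w3 * speed_mps)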
The output unit 56 outputs the depiction image generated by the image generation unit 55, displaying it on the display of the communication terminal 10.
Next, the processing executed by the image generation server 50 will be described with reference to FIG. 6. FIG. 6 is a flowchart showing the processing executed by the image generation server 50.
As shown in FIG. 6, the image generation server 50 first acquires the peripheral space information and the information indicating the user's state (step S1). The image generation server 50 then determines the sizes and positions of the public area, the private area, and the non-depiction area based on the acquired information (step S2).
Next, the image generation server 50 specifies the AR content to be displayed in the public area (step S3), and further specifies the private content to be displayed in the private area (step S4).
Finally, the image generation server 50 generates a depiction image in which the AR content is depicted in the public area and the private content is depicted in the private area (step S5), and outputs the depiction image (step S6).
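Pulling the earlier sketches together, steps S1 to S6 could be orchestrated roughly as follows; every helper is one of the hypothetical pieces introduced above, the proximity test is deliberately crude, and rendering itself is elided:

    def generate_frame(update: TerminalUpdate,
                       contents: list[ContentRecord]) -> dict:
        """Illustrative pipeline for steps S1-S6 of FIG. 6."""
        # S1: peripheral space information and user-state information arrive.
        mode = movement_mode(update.speed_mps)
        # S2: size the areas from the user's state; building type/shape
        # classification via classify_region() would refine this further.
        public_size, private_size = layout_for(mode)
        a3_weight = non_depiction_weight(mode)
        # S3: specify AR content for the public area by narrowing content
        # data 500 to objects near the user's position (crude degree test).
        nearby = [c for c in contents
                  if abs(c.object_position[0] - update.position[0]) < 0.01
                  and abs(c.object_position[1] - update.position[1]) < 0.01]
        # S4: the private content is the application the user designated.
        private_app = "user_designated_app"  # stand-in value
        # S5/S6: compose and output the depiction image (rendering elided).
        return {"public": (public_size, nearby),
                "private": (private_size, private_app),
                "non_depiction_weight": a3_weight}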
Next, the operation and effects of the information processing system 1 according to the present embodiment will be described.
The information processing system 1 according to the present embodiment includes the storage unit 57, which stores the content data 500 in which objects in the real space and AR content are associated with each other; the acquisition unit 51, which acquires peripheral space information including at least an image of the real space around the user; and the area determination unit 52, which determines the display area of content including the AR content based on the peripheral space information.
In such an information processing system 1, the display area of content is determined based on peripheral space information that includes at least an image of the real space around the user. According to such a system, the display area changes as the peripheral space information changes (that is, as the spatial information around the user changes). Each area can thus be adjusted according to the surrounding spatial information, improving the ease with which the user processes content. As described above, the information processing system 1 according to the present embodiment can improve the processability of content. Moreover, because improved ease of processing reduces wasteful operations by the user, the processing load on the system can be reduced.
The area determination unit 52 determines, within the image of the real space, a public area in which the AR content of the content data 500 is depicted and a private area in which private content relating to the user is depicted. The information processing system 1 further includes the first specifying unit 53, which refers to the content data 500 to specify the AR content associated with the real-space objects included in the public area; the second specifying unit 54, which specifies the private content depicted in the private area based on an instruction from the user; the image generation unit 55, which generates a depiction image in which the AR content specified by the first specifying unit 53 is depicted in the public area and the private content specified by the second specifying unit 54 is depicted in the private area; and the output unit 56, which outputs the depiction image.
In the information processing system 1 according to the present embodiment, the public area, in which AR content is depicted (superimposed) on the image of the real space, and the private area, in which the user's private content is depicted, are determined based on peripheral space information including at least an image of the real space around the user, and a depiction image in which content corresponding to each area is depicted in that area is generated and output. According to such a system, the public area and the private area change as the peripheral space information changes (that is, as the spatial information around the user changes). Thus, for example, the public area can be enlarged when AR content corresponding to the surrounding buildings is to be displayed mainly, as at a tourist spot, and the private area can be enlarged when the user's private content is to be displayed mainly rather than the AR content relating to the surrounding buildings; each area is adjusted according to the surrounding spatial information, improving the ease with which the user processes content. As described above, the information processing system according to the present embodiment can improve the processability of content.
The area determination unit 52 may determine, based on the peripheral space information, an area that secures the user's field of view as a non-depiction area in which no content is depicted. With such a configuration, content is never drawn in an area where the user's field of view should be secured, so deterioration of the user's field of view due to content drawing can be suppressed.
The area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the types of surrounding buildings identified from the peripheral space information. With such a configuration, each area can be determined more appropriately according to the type of building: for example, the regions of structures the user needs to see clearly, such as traffic lights and traffic signs, can be made non-depiction areas, so that both the processability of content and the securing of the user's field of view are achieved more appropriately.
The area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the shapes of surrounding buildings identified from the peripheral space information. With such a configuration, each area can be determined more appropriately according to the shape of a building: for example, a place with heavy foot traffic, such as the entrance of a building (an area where a collision with the user is likely and which the user needs to see clearly), can be made a non-depiction area, so that both the processability of content and the securing of the user's field of view are achieved more appropriately.
The acquisition unit 51 may further acquire information indicating the user's state, and the area determination unit 52 may determine the public area, the private area, and the non-depiction area with the user's state additionally taken into account. With such a configuration, not only the peripheral space information but also the state the user is in is considered, enabling more appropriate area determination from the viewpoint of the user's ease of content processing.
The acquisition unit 51 may acquire information indicating the user's state that includes at least the user's viewpoint, and the area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the user's viewpoint. With such a configuration, each area can be determined in consideration of where the user is looking, further improving the user's ease of content processing.
The acquisition unit 51 may acquire information indicating the user's state that includes the user's movement speed, and the area determination unit 52 may determine the public area, the private area, and the non-depiction area based on the user's movement mode identified from the movement speed. With such a configuration, each area can be determined appropriately according to the user's movement mode, for example by enlarging the non-depiction area while driving and enlarging the public area while walking.
The area determination unit 52 may determine that there is no non-depiction area when the user's movement mode is stationary, indicating that the user is not moving. While the user is stopped, there is no need to provide a non-depiction area for safety reasons, so controlling the areas in this way allows each area to be determined more appropriately.
Next, the hardware configurations of the communication terminal 10, the positioning server 30, and the image generation server 50 included in the information processing system 1 will be described with reference to FIG. 7. Each of the communication terminal 10, the positioning server 30, and the image generation server 50 may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
In the following description, the term "device" can be read as a circuit, a device, a unit, or the like. The hardware configurations of the communication terminal 10, the positioning server 30, and the image generation server 50 may each include one or more of the devices shown in the figure, or may be configured without some of the devices.
Each function of the communication terminal 10, the positioning server 30, and the image generation server 50 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, whereby the processor 1001 performs computation and controls communication by the communication device 1004 and the reading and/or writing of data in the memory 1002 and the storage 1003.
The processor 1001 controls the entire computer by, for example, operating an operating system. The processor 1001 may be configured as a central processing unit (CPU: Central Processing Unit) including an interface with peripheral devices, a control device, an arithmetic device, registers, and the like. For example, the control function of the area determination unit 52 and other units of the image generation server 50 may be realized by the processor 1001.
The processor 1001 also reads programs (program code), software modules, and data from the storage 1003 and/or the communication device 1004 into the memory 1002 and executes various kinds of processing according to them. As the program, a program that causes a computer to execute at least part of the operations described in the above embodiment is used.
For example, the control function of the area determination unit 52 and other units of the image generation server 50 may be realized by a control program stored in the memory 1002 and operating on the processor 1001, and the other functional blocks may be realized similarly. Although the various kinds of processing described above have been explained as being executed by a single processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented with one or more chips. The program may be transmitted from a network via an electric communication line.
The memory 1002 is a computer-readable recording medium and may be constituted by at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory). The memory 1002 may be called a register, a cache, a main memory (main storage device), or the like. The memory 1002 can store executable programs (program code), software modules, and the like for implementing a wireless communication method according to an embodiment of the present invention.
The storage 1003 is a computer-readable recording medium and may be constituted by at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, and a magnetic strip. The storage 1003 may be called an auxiliary storage device. The above-mentioned storage medium may be, for example, a database, a server, or another suitable medium including the memory 1002 and/or the storage 1003.
The communication device 1004 is hardware (a transmitting/receiving device) for performing communication between computers via a wired and/or wireless network, and is also referred to as, for example, a network device, a network controller, a network card, or a communication module.
The input device 1005 is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor) that accepts input from the outside. The output device 1006 is an output device (for example, a display, a speaker, or an LED lamp) that performs output to the outside. The input device 1005 and the output device 1006 may be integrated (for example, as a touch panel).
The devices such as the processor 1001 and the memory 1002 are connected by the bus 1007 for communicating information. The bus 1007 may be constituted by a single bus or by different buses between the devices.
The communication terminal 10, the positioning server 30, and the image generation server 50 may also be configured to include hardware such as a microprocessor, a digital signal processor (DSP: Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array), and part or all of each functional block may be realized by such hardware. For example, the processor 1001 may be implemented with at least one of these pieces of hardware.
Although the present embodiment has been described in detail above, it is apparent to those skilled in the art that the present embodiment is not limited to the embodiments described in this specification. The present embodiment can be implemented with modifications and changes without departing from the spirit and scope of the present invention defined by the description of the claims. Accordingly, the description in this specification is for illustrative purposes and has no restrictive meaning with respect to the present embodiment.
Each aspect/embodiment described in this specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-Wide Band), Bluetooth (registered trademark), or other appropriate systems, and/or to next-generation systems extended on the basis of them.
The order of the processing procedures, sequences, flowcharts, and the like of each aspect/embodiment described in this specification may be rearranged as long as no contradiction arises. For example, the methods described in this specification present elements of various steps in an exemplary order and are not limited to the specific order presented.
Input and output information and the like may be stored in a specific location (for example, a memory) or managed in a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
A determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by a comparison of numerical values (for example, a comparison with a predetermined value).
Each aspect/embodiment described in this specification may be used alone, in combination, or switched according to execution. Notification of predetermined information (for example, notification of "being X") is not limited to being performed explicitly and may be performed implicitly (for example, by not performing notification of the predetermined information).
Software, whether referred to as software, firmware, middleware, microcode, a hardware description language, or by another name, should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, execution threads, procedures, functions, and the like.
Software, instructions, and the like may be transmitted and received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using wired technology such as coaxial cable, optical fiber cable, twisted pair, and digital subscriber line (DSL) and/or wireless technology such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.
The information, signals, and the like described in this specification may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be mentioned throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
The terms described in this specification and/or the terms necessary for understanding this specification may be replaced with terms having the same or similar meanings.
The information, parameters, and the like described in this specification may be represented by absolute values, by values relative to a predetermined value, or by other corresponding information.
A communication terminal may be referred to by those skilled in the art as a mobile communication terminal, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or by several other appropriate terms.
The phrase "based on" as used in this specification does not mean "based only on" unless explicitly stated otherwise. In other words, the phrase "based on" means both "based only on" and "based at least on".
Where designations such as "first" and "second" are used in this specification, any reference to the elements concerned does not generally limit the quantity or order of those elements. These designations may be used in this specification as a convenient way of distinguishing between two or more elements. Accordingly, references to first and second elements do not mean that only two elements may be adopted there or that the first element must precede the second element in some way.
To the extent that "include", "including", and variations thereof are used in this specification or the claims, these terms are intended to be inclusive, like the term "comprising". Furthermore, the term "or" as used in this specification or the claims is intended not to be an exclusive OR.
In this specification, a plurality of devices shall also be included unless the context or the technology clearly indicates that only one device exists.
Throughout this disclosure, the plural shall be included unless the singular is clearly indicated from the context.
1...Information processing system, 51...Acquisition unit, 52...Area determination unit, 53...First specifying unit, 54...Second specifying unit, 55...Image generation unit, 56...Output unit, 57...Storage unit, 500...Content data.

Claims (9)

1.  An information processing system comprising:
     a storage unit that stores content information in which an object in a real space and AR content are associated with each other;
     an acquisition unit that acquires peripheral space information including at least an image of the real space around a user; and
     an area determination unit that determines, based on the peripheral space information, a display area of content including the AR content.
2.  The information processing system according to claim 1, wherein
     the area determination unit determines, in the image of the real space, a public area in which the AR content of the content information is depicted and a private area in which private content relating to the user is depicted, and
     the information processing system further comprises:
     a first specifying unit that specifies, by referring to the content information, the AR content associated with an object in the real space included in the public area;
     a second specifying unit that specifies, based on an instruction from the user, the private content to be depicted in the private area;
     an image generation unit that generates a depiction image in which the AR content specified by the first specifying unit is depicted in the public area and the private content specified by the second specifying unit is depicted in the private area; and
     an output unit that outputs the depiction image.
3.  The information processing system according to claim 2, wherein the area determination unit determines, based on the peripheral space information, an area that secures the user's field of view as a non-depiction area in which no content is depicted.
4.  The information processing system according to claim 3, wherein the area determination unit determines the public area, the private area, and the non-depiction area based on a type of a surrounding building identified from the peripheral space information.
5.  The information processing system according to claim 3 or 4, wherein the area determination unit determines the public area, the private area, and the non-depiction area based on a shape of a surrounding building identified from the peripheral space information.
6.  The information processing system according to any one of claims 3 to 5, wherein
     the acquisition unit further acquires information indicating a state of the user, and
     the area determination unit determines the public area, the private area, and the non-depiction area with the state of the user further taken into account.
7.  The information processing system according to claim 6, wherein
     the acquisition unit acquires information indicating the state of the user including at least a viewpoint of the user, and
     the area determination unit determines the public area, the private area, and the non-depiction area based on the viewpoint of the user.
8.  The information processing system according to claim 6 or 7, wherein
     the acquisition unit acquires information indicating the state of the user including a movement speed of the user, and
     the area determination unit determines the public area, the private area, and the non-depiction area based on a movement mode of the user identified from the movement speed of the user.
9.  The information processing system according to claim 8, wherein the area determination unit determines that there is no non-depiction area when the movement mode of the user is stationary, indicating that the user is not moving.
PCT/JP2021/038963 2020-12-11 2021-10-21 Information processing system WO2022123922A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022568088A JPWO2022123922A1 (en) 2020-12-11 2021-10-21

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-205957 2020-12-11
JP2020205957 2020-12-11

Publications (1)

Publication Number Publication Date
WO2022123922A1 true WO2022123922A1 (en) 2022-06-16

Family

ID=81973621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/038963 WO2022123922A1 (en) 2020-12-11 2021-10-21 Information processing system

Country Status (2)

Country Link
JP (1) JPWO2022123922A1 (en)
WO (1) WO2022123922A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018163266A1 (en) * 2017-03-07 2018-09-13 三菱電機株式会社 Display control device and display control method
WO2019097918A1 (en) * 2017-11-14 2019-05-23 マクセル株式会社 Head-up display device and display control method for same
JP2019113809A (en) * 2017-12-26 2019-07-11 マクセル株式会社 Head-up display device
JP2019217790A (en) * 2016-10-13 2019-12-26 マクセル株式会社 Head-up display device

Also Published As

Publication number Publication date
JPWO2022123922A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
US20210312214A1 (en) Image recognition method, apparatus and non-transitory computer readable storage medium
KR102276847B1 (en) Method for providing a virtual object and electronic device thereof
US20180301111A1 (en) Electronic device and method for displaying electronic map in electronic device
US10893092B2 (en) Electronic device for sharing application and control method thereof
US8907773B2 (en) Image processing for image display apparatus mounted to vehicle
KR102222250B1 (en) Method and Apparatus for Providing Route Guidance using Reference Points
US20230160716A1 (en) Method and apparatus for displaying surrounding information using augmented reality
KR102255432B1 (en) Electronic apparatus and control method thereof
WO2012127768A1 (en) Device for vehicle, and external device screen display system
US20180164404A1 (en) Computer-readable recording medium, display control method and display control device
US20160018238A1 (en) Route inspection portals
CN110388912A (en) Plan the method, apparatus and storage medium of the flight path of flight equipment
US10089771B2 (en) Method and apparatus for non-occluding overlay of user interface or information elements on a contextual map
US20160343156A1 (en) Information display device and information display program
US20220295017A1 (en) Rendezvous assistance apparatus, rendezvous assistance system, and rendezvous assistance method
KR20150141419A (en) Method for utilizing image based on location information of the image in electronic device and the electronic device thereof
WO2022123922A1 (en) Information processing system
WO2021192873A1 (en) Positioning system
WO2022163651A1 (en) Information processing system
US20240071023A1 (en) Method and apparatus for detecting near-field object, and medium and electronic device
KR20150128302A (en) Electronic device and interconnecting method thereof
JP7529950B2 (en) Information Processing System
US20190098455A1 (en) Portable electronic device, method of controlling portable electronic device, and non-transitory computer-readable medium
US11763667B2 (en) Control device, system, and pedestrian support method
WO2021166747A1 (en) Information processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21903026; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022568088; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21903026; Country of ref document: EP; Kind code of ref document: A1)