WO2022163651A1 - Information processing system - Google Patents

Information processing system

Info

Publication number
WO2022163651A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
unit
spatial structure
state
Prior art date
Application number
PCT/JP2022/002689
Other languages
English (en)
Japanese (ja)
Inventor
泰士 山本
江利子 大関
宏樹 林
修 後藤
幹生 岩村
真治 木村
Original Assignee
株式会社Nttドコモ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Nttドコモ filed Critical 株式会社Nttドコモ
Publication of WO2022163651A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/36: Input/output arrangements for on-board computers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y10/00: Economic sectors
    • G16Y10/40: Transportation
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00: IoT characterised by the purpose of the information processing
    • G16Y40/60: Positioning; Navigation

Definitions

  • One aspect of the present invention relates to an information processing system.
  • Document 1 describes a terminal device that uses AR (Augmented Reality) to present information that is not included in the user's field of view in real space to the user. According to such a device, the user can obtain information outside the field of view of the user, thereby improving convenience for the user.
  • However, when the information around the user is physically blocked by an obstacle or the like, the information may not be visually recognized by the user.
  • In that case, the information cannot be presented to the user even by the above-described conventional device, and the convenience of the user who uses the information, as well as the interests of the individual or organization that provides the information, may be harmed.
  • For example, when the user's line of sight is blocked by a car, a roadside tree, or the like and the user cannot visually recognize a traffic sign, the conventional device cannot present the traffic sign to the user while driving, which impairs the safety of the user. Further, for example, when a corporate advertisement on a train is blocked by passengers, straps, or the like, the conventional device cannot present the advertisement to the user; the user loses the opportunity to view the advertisement, and the profits of the advertising company suffer.
  • One aspect of the present invention has been made in view of the above circumstances, and aims to provide an information processing system capable of appropriately presenting information that the user cannot visually recognize to the user.
  • An information processing system according to one aspect of the present invention includes: a storage unit that stores spatial structure data, which is data representing an object in a real space in a three-dimensional virtual space and represents the shape of the object at a position in the virtual space corresponding to the position of the object in the real space; an acquisition unit that acquires a user's position and the user's viewing state; an identification unit that identifies, from the storage unit, spatial structure data corresponding to the user's position; a first estimating unit that estimates, based on the user's position in the identified spatial structure data, an ideal viewing state in which the user can visually recognize one or more objects that can be included in the user's field of view; a second estimating unit that compares the ideal viewing state estimated by the first estimating unit with the user's viewing state and estimates, from the difference between the ideal viewing state and the user's viewing state, an object that the user cannot visually recognize; and an output unit that outputs the object estimated by the second estimating unit to be invisible to the user so that the user can visually recognize it.
  • In the information processing system according to this aspect, the user's position and the user's viewing state are acquired, spatial structure data corresponding to the user's position is identified, an ideal viewing state is estimated based on the user's position in the identified spatial structure data, and the ideal viewing state is compared with the user's viewing state. From the difference between the ideal viewing state and the user's viewing state, an object that the user cannot visually recognize is estimated, and that object is output so that the user can visually recognize it.
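  • The overall flow can be sketched as follows. This is an illustrative sketch only, not code from the disclosure: the names (SpatialObject, present_invisible_objects, move_toward) and the specific repositioning rule are assumptions chosen to make the steps concrete.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SpatialObject:
    object_id: str
    object_type: str   # e.g. "traffic sign"
    position: Vec3     # position in the 3D virtual space (matches the real-space position)

def move_toward(obj: SpatialObject, user_position: Vec3, factor: float) -> SpatialObject:
    """Bring the object closer to the user along the straight line connecting them."""
    new_pos = tuple(u + factor * (p - u) for u, p in zip(user_position, obj.position))
    return SpatialObject(obj.object_id, obj.object_type, new_pos)

def present_invisible_objects(
    user_position: Vec3,
    visible_object_ids: Set[str],             # user's viewing state, estimated from the captured image
    spatial_structure: List[SpatialObject],   # spatial structure data identified from the user's position
) -> List[SpatialObject]:
    # Ideal viewing state: every object that could be included in the user's field of view.
    ideal_viewing_state = spatial_structure
    # Difference between the ideal viewing state and the user's viewing state.
    invisible = [o for o in ideal_viewing_state if o.object_id not in visible_object_ids]
    # Output each invisible object so that the user can visually recognize it.
    return [move_toward(o, user_position, factor=0.3) for o in invisible]
```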
  • Here, there are cases where information around the user is physically blocked and the user cannot visually recognize the information; in such cases, the interests of the individual or entity concerned may be harmed.
  • For example, when the user's line of sight is blocked by cars, roadside trees, or the like and the user cannot see a traffic sign while driving or walking, the safety of the user is impaired.
  • Further, when company advertisements on a train are blocked by passengers, straps, and the like, the opportunity for users to see the advertisements is lost, and the profits of the company are impaired.
  • In the information processing system according to this aspect, by contrast, an object that the user cannot visually recognize in the real space is displayed on the user's terminal, such as an AR display device. Accordingly, information that the user cannot visually recognize can be appropriately presented to the user.
  • According to one aspect of the present invention, it is therefore possible to provide an information processing system capable of appropriately presenting, to the user, information that the user cannot visually recognize.
  • FIG. 1 is a block diagram showing the functional configuration of an information processing system according to this embodiment. Other figures explain an example of the information that a communication terminal transmits to a server and an example of the information processing in a server.
  • Several figures illustrate an example of a method of presenting, to the user, an object that the user cannot visually recognize.
  • Flowcharts show the processing performed by the information processing system according to the embodiment, and a further figure shows the hardware configuration of the communication terminal, the positioning server, and the spatial structure server included in the information processing system according to this embodiment.
  • The information processing system 1 shown in FIGS. 1, 2, and 4 is a system that presents to the user information that lies ahead of the user's line of sight but is not visible to the user. More specifically, the information processing system 1 acquires information such as the position of the communication terminal 10 and the user's viewing state, identifies spatial structure data from the position of the communication terminal 10, estimates an ideal viewing state from the position of the communication terminal 10 in the identified spatial structure data, compares the ideal viewing state with the user's viewing state, estimates information that the user cannot view from the difference between the ideal viewing state and the user's viewing state, and presents that information to the user.
  • Here, the user's viewing state is information indicating at least the state that the user is actually viewing in the real space (details will be described later), and the ideal viewing state is information indicating the state in which the user could visually recognize all objects that can be included in the user's field of view in the real space (details will be described later).
  • The information processing system 1 compares the ideal viewing state with the user's viewing state to identify an object that the user cannot view even though it could be included in the user's field of view, and presents that object to the user.
  • the information processing system 1 updates the object in the three-dimensional virtual space based on the obtained information such as the position of the communication terminal 10 and the visual recognition state of the user.
  • The information processing system 1 includes a communication terminal 10, a positioning server 30, and a spatial structure server 50.
  • the communication terminal 10 is a terminal carried by a user or a terminal arranged around the user. First, an outline of processing performed by the information processing system 1 will be described.
  • the captured image captured by the communication terminal 10 is transmitted to the positioning server 30.
  • a captured image P1 captured by the communication terminal 10 is shown.
  • the captured image P1 includes an obstacle D1 and an object X3 representing a "crosswalk traffic sign".
  • The communication terminal 10 transmits the captured image P1 to the positioning server 30.
  • The positioning server 30 acquires global position information based on the captured image P1 captured by the communication terminal 10 and transmits the global position information to the communication terminal 10.
  • Global location information is location information (absolute location information) indicated by a common coordinate system that can be used by any device.
  • Global position information includes, for example, position, direction, and tilt information. Details of the global location information will be described later.
  • Global position information is not limited to a method using a photographed image.
  • global location information may be acquired in the positioning server 30 based on information acquired using GPS, RTK, geomagnetism, Wi-Fi, Bluetooth (registered trademark), or the like.
  • the communication terminal 10 estimates the position of the communication terminal 10 by acquiring global position information from the positioning server 30 .
  • the communication terminal 10 then transmits information including at least the acquired global position information and the captured image to the spatial structure server 50 .
  • In the example shown in FIG. 2, the communication terminals 10 carried by users A and B (hereinafter referred to as "the users' communication terminals 10") and the communication terminals 10 with imaging functions placed around the users (hereinafter referred to as "the surrounding communication terminals 10") each transmit the captured image R captured by that communication terminal 10 to the positioning server 30.
  • the communication terminal 10 of the user and the communication terminals 10 arranged in the vicinity receive the global location information of each terminal from the positioning server.
  • Each communication terminal 10 transmits global position information (user position and terminal position in FIG. 2) and captured images to the spatial structure server 50 .
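  • As a rough sketch of the information exchanged here, the positioning result and the accompanying data sent to the spatial structure server 50 could be modelled as below; the field names and types are assumptions, since the description only states that the global position information includes position, direction, and tilt.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GlobalPosition:
    """Absolute position in the common coordinate system usable by any device."""
    latitude: float
    longitude: float
    height: float
    roll: float    # direction estimated from the captured image
    pitch: float
    yaw: float

@dataclass
class TerminalReport:
    """Information a communication terminal 10 sends to the spatial structure server 50."""
    terminal_id: str
    global_position: GlobalPosition
    captured_image: bytes
    viewing_direction: Optional[Tuple[float, float, float]] = None  # for user terminals
    purpose_info: Optional[str] = None                              # e.g. "driving"
```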
  • the spatial structure server 50 stores spatial structure data (details will be described later), which is data representing objects in the real space in a three-dimensional virtual space.
  • the spatial structure server 50 receives information including at least global position information and captured images from the communication terminal 10 .
  • the spatial structure server 50 identifies spatial structure data corresponding to the global position information.
  • the spatial structure server 50 also estimates the user's visual recognition state from the captured image.
  • In this example, an object X1 representing a "traffic light" and an object X2 representing a "stop sign" are included in the spatial structure data D (see the lower right diagram in FIG. 1).
  • Objects X1, X2, and X3 are physical space objects.
  • the spatial structure server 50 identifies the spatial structure data D based on the global position information acquired from the captured image P1 by the positioning server 30 .
  • the specified spatial structure data D represents the shapes of the objects X1, X2, and X3 represented in the three-dimensional virtual space.
  • the spatial structure server 50 estimates from the captured image P1 that "the user is visually recognizing the object X3 in the real space". An example of estimating the visual recognition state of the user by the spatial structure server 50 will be described later.
  • In this embodiment, the spatial structure server 50 performs the process of estimating the user's viewing state and the like, but the present invention is not limited to this.
  • A server other than the spatial structure server 50 may perform the process of estimating the user's viewing state and the like, or the estimation process may be performed by the communication terminal 10.
  • the spatial structure server 50 updates the object information in the spatial structure data based on the objects in the real space included in the user's visual recognition state.
  • the spatial structure server 50 receives global position information and captured images from the communication terminals 10 carried by the users A and B and the communication terminals 10 such as surveillance cameras.
  • the spatial structure server 50 acquires the latest states of the objects X1, X2, and X3 in the physical space from the received global position information and captured images.
  • the spatial structure server 50 updates the objects X1, X2, and X3 to the latest state in the spatial structure data.
  • the spatial structure server 50 estimates an ideal viewing state (described later) in the specified spatial structure data based on the global position information.
  • the spatial structure server 50 compares the ideal viewing state and the user's viewing state, and estimates objects that the user cannot view from the difference between the ideal viewing state and the user's viewing state.
  • the spatial structure server 50 transmits objects that the user cannot visually recognize to the communication terminal 10 .
  • the spatial structure server 50 changes the position information of the object so that the user can visually recognize the object.
  • Spatial structure server 50 transmits the object and the changed location information to communication terminal 10 .
  • the communication terminal 10 displays the object on the screen or the like of the communication terminal 10 based on the received position information.
  • The spatial structure server 50 estimates that the state in which the objects X1, X2, and X3 are visible to user A is the ideal viewing state of user A, based on the global position information received from user A. Based on the captured image received from user A, the spatial structure server 50 estimates (acquires), as user A's viewing state, that only the object X3 can be viewed. The spatial structure server 50 compares user A's ideal viewing state with user A's viewing state, and estimates from the difference between the ideal viewing state of user A and the viewing state of user A that the objects that user A cannot visually recognize are the objects X1 and X2.
  • the spatial structure server 50 changes the position information of the objects X1 and X2 so that the user A can visually recognize them, and transmits the changed position information and the objects X1 and X2 to the communication terminal 10 of the user A.
  • User A's communication terminal 10 receives objects X1 and X2 from spatial structure server 50 .
  • User A's communication terminal 10 displays the received objects X1 and X2 on the terminal screen or the like corresponding to the position information after the change.
  • The spatial structure server 50 acquires, from user B who can visually recognize the objects X1 and X2, the captured image P2 including the objects X1 and X2, together with the position information of user B.
  • the spatial structure server 50 obtains the latest states of the objects X1 and X2 from the captured image P2 and the positional information obtained.
  • the spatial structure server 50 preliminarily updates the objects X1 and X2 in the three-dimensional virtual space based on the acquired latest state.
  • When the spatial structure server 50 estimates that the objects X1 and X2 cannot be visually recognized by user A, the spatial structure server 50 changes the position information of the objects X1 and X2 so that user A can visually recognize them.
  • The spatial structure server 50 transmits the changed position information and the objects X1 and X2 to user A's communication terminal 10.
  • The communication terminal 10 displays, on the screen G, the shape of the object X1 at the changed position, and sets the displayed object as the object Y1 (see the left diagram of FIG. 3).
  • the communication terminal 10 similarly performs display processing for the object X2, and sets the displayed object as the object Y2 (see the left diagram in FIG. 3).
  • the location positioning server 30 has a storage unit 31 and a positioning unit 32 as functional components.
  • The storage unit 31 stores map data 300 in which feature amounts of feature points (for example, luminance direction vectors) are associated with global position information, which is absolute position information associated with the feature points.
  • Map data 300 is, for example, a 3D point cloud.
  • the map data 300 is captured in advance by a stereo camera (not shown) capable of simultaneously capturing images of an object from a plurality of different directions, and is generated based on a large number of captured images.
  • a feature point is a point that is conspicuously detected in an image, and is, for example, a point that has a higher (or lower) brightness (intensity) than other regions.
  • the global position information of the feature point is global position information set in association with the feature point, and is global position information in the real world of the area indicated by the feature point in the image. Note that the global position information can be associated with each feature point by a conventionally known method.
  • the storage unit 31 stores three-dimensional global position information as global position information of feature points of the map data 300 .
  • the storage unit 31 stores, for example, the latitude, longitude and height of the feature points as the three-dimensional global position information of the feature points.
  • The storage unit 31 may store a plurality of divided map data obtained by dividing the map data 300 into certain areas according to the global position information.
  • The positioning unit 32 obtains global position information (three-dimensional position information). Specifically, the positioning unit 32 performs matching between the feature points of the map data 300 and the feature points of the captured image captured by the communication terminal 10, and identifies the area of the map data 300 corresponding to the captured image. Then, the positioning unit 32 estimates the imaging position of the captured image (that is, the global position information of the communication terminal 10 at the time of imaging) based on the global position information associated with the feature points of the map data 300 in the identified area. The positioning unit 32 transmits the positioning result to the communication terminal 10.
  • the positioning result includes information on the direction estimated from the captured image (the direction in the three-dimensional coordinates of roll, pitch, and yaw).
  • The positioning unit 32 may acquire global position information based on captured images captured by the communication terminal 10 at a constant cycle, or may acquire global position information based on a captured image captured by the communication terminal 10 at the timing when an instruction is received from the user.
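  • The description does not fix a particular image-matching algorithm; the following sketch assumes ORB feature matching and PnP pose estimation with OpenCV as one concrete way the positioning unit 32 could estimate the imaging position from a captured image and stored feature points with known global positions (map_descriptors and map_points_3d are hypothetical inputs).

```python
import cv2
import numpy as np

def estimate_global_pose(captured_image, map_descriptors, map_points_3d, camera_matrix):
    """Match the captured image against stored map feature points and estimate
    the imaging position (the terminal's global position at the time of imaging)."""
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(captured_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    # 2D points in the captured image and the corresponding 3D global positions.
    image_points = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_points = np.float32([map_points_3d[m.trainIdx] for m in matches])
    ok, rvec, tvec = cv2.solvePnPRansac(object_points, image_points, camera_matrix, None)[:3]
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)
    position = (-rotation.T @ tvec).ravel()   # camera position in the global frame
    return position, rvec                     # rvec encodes the direction (roll, pitch, yaw)
```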
  • the communication terminal 10 is a terminal carried by the user or a terminal arranged around the user.
  • the communication terminal 10 is, for example, a terminal configured to perform wireless communication.
  • the communication terminal 10 is, for example, a smart phone, a tablet terminal, a PC, a goggle-type wearable device, or the like.
  • the communication terminal 10 may be an imaging device with a communication function such as a monitoring camera or a fixed point camera. For example, when an application is executed, the communication terminal 10 captures an image with a mounted camera. Then, the communication terminal 10 transmits the captured image to the positioning server 30 and acquires the positioning result corresponding to the captured image from the positioning server 30 .
  • the communication terminal 10 acquires the global position information and the viewing direction of the user from the positioning result.
  • the communication terminal 10 acquires the user's purpose information (details will be described later) from manual input by the user, search history of the browser of the communication terminal 10, and the like.
  • the communication terminal 10 transmits the captured image, the acquired global position information, the user's viewing direction, and the user's purpose information to the spatial structure server 50 .
  • the viewing direction of the user is information specified from the positioning result obtained by the communication terminal 10 from the positioning server 30 .
  • the viewing direction of the user is the direction near the center of the captured image P1.
  • the user's purpose information is information indicating the user's action purpose.
  • the user's purpose information is specified from the user's manual input, the search history of the browser of the communication terminal 10, the application currently being used by the user, and the like.
  • the user's purpose information is, for example, information indicating that the user is currently driving, location information of the destination set by the user using a map application or the like, and information such as food that the user prefers.
  • The communication terminal 10 receives from the spatial structure server 50 an "object not visible to the user" estimated by the spatial structure server 50 based on the information transmitted to the spatial structure server 50, and outputs the object to the screen or the like of the communication terminal 10.
  • In the example shown in FIG. 1, the position information of the objects X1 and X2, which user A cannot visually recognize, is changed so that user A can visually recognize the objects X1 and X2.
  • The communication terminal 10 of user A displays, as the object Y1, the shape of the object X1 at the changed position.
  • the communication terminal 10 similarly performs display processing for the object X2, and displays the object Y2.
  • The spatial structure server 50 includes, as functional components, a storage unit 51, an acquisition unit 52, an identification unit 53, a first estimation unit 54, a second estimation unit 55, an output unit 56, and an update unit 57.
  • the storage unit 51 stores spatial structure data.
  • the storage unit 51 also stores the types of objects included in the spatial structure data.
  • the spatial structure data is data representing an object in the real space in a three-dimensional virtual space, and is data representing the shape of the object at a position in the virtual space corresponding to the position of the object in the real space.
  • the storage unit 51 stores data 500 in which global position information and spatial structure data are associated.
  • the spatial structure data will be explained in detail.
  • the spatial structure data is data representing an object in the real space in a three-dimensional virtual space, and is data representing the shape of the object at a position in the virtual space corresponding to the position of the object in the real space. For example, assume that a plurality of buildings (a plurality of objects) exist in a certain outdoor location in the real space.
  • the structure data of the virtual space corresponding to the outdoor location represents a ground object and a plurality of building objects arranged at the same position as the outdoor location.
  • Similarly, the spatial structure data of the virtual space corresponding to a certain indoor location represents wall objects, floor objects, ceiling objects, and chair objects arranged at the same positions as in that indoor location. That is, objects in the spatial structure data are linked with objects in the real space.
  • the spatial structure data may be data representing only the shape of a static object (an object that basically does not move). Objects of spatial structure data may include objects (virtual objects) that are not linked to objects in the real space.
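  • One possible in-memory model of the data 500 (global position information associated with spatial structure data) is sketched below; the grid-cell indexing and the cell size are assumptions, since the description only says that spatial structure data corresponding to a global position can be identified.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class SpatialObject:
    object_id: str
    object_type: str                       # e.g. "traffic light", "stop sign"
    position: Tuple[float, float, float]   # virtual-space position matching the real-space position
    shape: Optional[object] = None         # mesh or bounding volume (omitted here)

@dataclass
class SpatialStructureStore:
    """Data 500: association between global positions and spatial structure data."""
    cells: Dict[Tuple[int, int], List[SpatialObject]] = field(default_factory=dict)
    cell_size: float = 50.0                # metres per cell (assumed)

    def _cell(self, x: float, y: float) -> Tuple[int, int]:
        return int(x // self.cell_size), int(y // self.cell_size)

    def add(self, obj: SpatialObject) -> None:
        key = self._cell(obj.position[0], obj.position[1])
        self.cells.setdefault(key, []).append(obj)

    def find_by_position(self, x: float, y: float) -> List[SpatialObject]:
        """Return the spatial structure data corresponding to a global position."""
        return self.cells.get(self._cell(x, y), [])
```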
  • the storage unit 51 stores the captured image acquired by the acquisition unit 52, the global position information acquired from the communication terminal 10, the viewing direction of the user, the purpose information of the user, and the movement information of the user (described later). .
  • the acquisition unit 52 acquires the user's position. Specifically, the acquisition unit 52 acquires the global location information from the communication terminal 10 . In addition, the acquisition unit 52 acquires the user's visual recognition state. Specifically, the acquisition unit 52 acquires the visual recognition state of the user based on information captured by the communication terminal 10 carried by the user or the communication terminals 10 arranged around the user.
  • The user's viewing state is information including at least the state that the user is actually viewing in the real space, at least part of an object in the real space, the user's movement information, the user's viewing direction, and the user's purpose information.
  • the acquisition unit 52 estimates an object actually included in the field of view of the user in the real space from the captured image, and acquires a state that is estimated to be actually viewed by the user in the real space.
  • the acquisition unit 52 acquires at least part of the object in the physical space from the captured image.
  • the acquisition unit 52 generates (acquires) user movement information from the global position information acquired over time. Then, the acquisition unit 52 acquires the user's viewing direction and the user's purpose information from the communication terminal 10 .
  • at least part of an object in the physical space means, for example, the shape of an object viewed from a certain direction.
  • the acquisition unit 52 acquires the captured image P1 captured by the communication terminal 10 of the user A and the global position information of the user A from the communication terminal 10 .
  • the acquiring unit 52 estimates a state in which the user A can visually recognize the object X3 (visual recognition state of the user A) based on the acquired captured image P1.
  • the acquisition unit 52 estimates the shape of the object X3 (at least part of the object in the physical space) as viewed from the imaging direction based on the acquired captured image P1.
  • the acquisition unit 52 acquires the movement history (movement information) of the user A based on changes over time in the continuously acquired global position information.
  • the acquisition unit 52 acquires the viewing direction of the user A and the purpose information of the user A from the communication terminal 10 .
  • The specifying unit 53 specifies spatial structure data corresponding to the user's position from the storage unit 51. Specifically, based on the data 500 stored in the storage unit 51 and the global position information acquired by the acquiring unit 52, the specifying unit 53 identifies the spatial structure data corresponding to the global position information of the communication terminal 10. For example, from the global position information received from the communication terminal 10 of user A, the identifying unit 53 identifies, as spatial structure data, a cylindrical three-dimensional virtual space with the position of user A as the center of the base circle. Further, the specifying unit 53 may specify the spatial structure data further based on the user's viewing direction included in the user's viewing state.
  • For example, the specifying unit 53 may specify, as the spatial structure data, the portion of the three-dimensional virtual space specified based on the position of user A that falls within the range of the viewing angle of user A, further based on the viewing direction of user A.
  • the specifying unit 53 specifies the spatial structure data D from the user A's global position information and the user's A viewing direction.
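  • A minimal sketch of this identification step is given below, assuming a cylindrical region of fixed radius around the user and an optional viewing-angle filter; the radius and half-angle values are assumptions.

```python
import math
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

def identify_spatial_structure(object_positions: List[Vec3],
                               user_pos: Vec3,
                               radius: float = 50.0,
                               view_dir: Optional[Tuple[float, float]] = None,
                               half_angle_deg: float = 45.0) -> List[Vec3]:
    """Select object positions inside a cylinder centred on the user, optionally
    narrowed to the range of the user's viewing angle."""
    selected = []
    for p in object_positions:
        dx, dy = p[0] - user_pos[0], p[1] - user_pos[1]
        if math.hypot(dx, dy) > radius:
            continue   # outside the cylindrical three-dimensional virtual space
        if view_dir is not None:
            angle = math.degrees(math.atan2(dy, dx) - math.atan2(view_dir[1], view_dir[0]))
            angle = (angle + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
            if abs(angle) > half_angle_deg:
                continue   # outside the user's viewing angle
        selected.append(p)
    return selected
```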
  • The first estimating unit 54 estimates, based on the user's position in the spatial structure data specified by the specifying unit 53, an ideal viewing state in which the user can visually recognize one or more objects that may be included in the user's field of view.
  • Specifically, the first estimating unit 54 estimates one or more objects included in the spatial structure data specified by the specifying unit 53 based on the position of the user, and sets the state in which those objects can be visually recognized as the ideal viewing state.
  • Alternatively, the first estimation unit 54 may estimate one or more objects included in the specified spatial structure data based on the position of the user and the viewing direction of the user, and set the state in which those objects can be viewed as the ideal viewing state.
  • the first estimation unit 54 estimates that the state in which the objects X1, X2, and X3 included in the specified spatial structure data D can be visually recognized is the ideal visible state.
  • The first estimation unit 54 may estimate the future position of the user from the movement information of the user included in the user's viewing state, and estimate the ideal viewing state based on the future position of the user in the specified spatial structure data. Specifically, the first estimating unit 54 estimates one or a plurality of possible destinations of the user based on temporal changes in the user's global position information, and then estimates an ideal viewing state for each estimated position. For example, the first estimation unit 54 estimates the user's destination as the user's future position from the user's global position information for each minute, and estimates the ideal viewing state at each future position.
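  • As a sketch of the future-position estimation, a constant-velocity extrapolation over the recent global positions is shown below; the disclosure only states that destinations are estimated from temporal changes in the global position information, so the motion model and the prediction horizons are assumptions.

```python
from typing import List, Sequence, Tuple

def estimate_future_positions(history: List[Tuple[float, float, float]],
                              horizons_s: Sequence[float] = (30.0, 60.0)) -> List[Tuple[float, float]]:
    """Estimate candidate future positions from timestamped positions (t, x, y).
    Requires at least two history entries; assumes constant velocity."""
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    dt = max(t1 - t0, 1e-6)
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * h, y1 + vy * h) for h in horizons_s]
```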
  • The first estimating unit 54 may identify one or more objects from all the objects that can be included in the user's field of view, taking into consideration the degree of association between the user's purpose information included in the user's viewing state and the type of each object, and estimate an ideal viewing state in which the user can visually recognize the identified objects.
  • Specifically, the first estimating unit 54 identifies, among the one or more objects included in the identified spatial structure data, an object of a type highly related to the user's purpose information obtained by the obtaining unit 52, and defines the state in which the user can visually recognize the identified object as the ideal viewing state.
  • For example, when the purpose information indicates that the user is driving, the first estimation unit 54 identifies an object whose type is "traffic sign" as an object of a type with a high degree of relevance to that information.
  • Further, for example, when the purpose information indicates that the user likes ramen, the first estimation unit 54 identifies an object whose type is "ramen", such as a signboard of a ramen shop or an advertisement for instant noodles, as an object of a type highly related to that information. Then, the first estimation unit 54 sets the state in which the user can visually recognize the identified objects as the ideal viewing state.
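  • The relevance between purpose information and object types could be looked up from a table such as the one sketched below; the table contents and the dictionary-based object representation are assumptions illustrating the two examples above.

```python
from typing import Dict, List, Set

# Assumed relevance table: purpose information -> object types considered highly related.
RELEVANCE: Dict[str, Set[str]] = {
    "driving": {"traffic sign", "traffic light"},
    "likes ramen": {"ramen"},
}

def select_relevant_objects(objects: List[dict], purpose_info: List[str]) -> List[dict]:
    """Keep only objects whose type is highly related to the user's purpose information;
    the state in which these objects are visible is taken as the ideal viewing state."""
    wanted: Set[str] = set()
    for p in purpose_info:
        wanted |= RELEVANCE.get(p, set())
    return [o for o in objects if o["object_type"] in wanted]
```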
  • The second estimating unit 55 compares the ideal viewing state estimated by the first estimating unit 54 with the user's viewing state, and estimates, from the difference between the ideal viewing state and the user's viewing state, an object that the user cannot visually recognize. Specifically, the second estimation unit 55 identifies, among the objects included in the ideal viewing state, the objects that are not included in the user's viewing state as difference objects, and regards the identified difference objects as objects that the user cannot visually recognize.
  • In the example shown in FIG. 6, the second estimating unit 55 compares the state in which the objects X1, X2, and X3 included in the specified spatial structure data D can be visually recognized (the ideal viewing state) with the state in which user A is visually recognizing the object X3 in the captured image P1 (user A's viewing state).
  • The second estimating unit 55 identifies the objects X1 and X2, which are included in the ideal viewing state but not in user A's viewing state, as difference objects, and estimates that they are objects that user A cannot visually recognize.
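  • The difference computation itself is a simple set difference; the sketch below reproduces the example just described (ideal viewing state {X1, X2, X3}, user A viewing only X3), with a dictionary-based object representation assumed for brevity.

```python
from typing import Dict, List

def estimate_invisible_objects(ideal_viewing_state: List[Dict],
                               viewed_object_ids: List[str]) -> List[Dict]:
    """Objects in the ideal viewing state but not in the user's viewing state
    (difference objects) are estimated to be invisible to the user."""
    viewed = set(viewed_object_ids)
    return [obj for obj in ideal_viewing_state if obj["object_id"] not in viewed]

ideal = [{"object_id": "X1"}, {"object_id": "X2"}, {"object_id": "X3"}]
print([o["object_id"] for o in estimate_invisible_objects(ideal, ["X3"])])  # ['X1', 'X2']
```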
  • The output unit 56 outputs the object estimated by the second estimation unit 55 to be invisible to the user so that the user can visually recognize it. Specifically, the output unit 56 changes the separation distance between the position of the user and the position of the object estimated to be invisible to the user so that the user can visually recognize the object, and outputs that object. As an example, the output unit 56 connects the position of the user and the position of the object with a straight line, and changes the separation distance so as to bring the object closer to the user along that straight line. The output unit 56 transmits the object whose display position has been changed to the communication terminal 10 carried by the user.
  • In the example described above, the output unit 56 changes the separation distance between user A and the objects X1 and X2 so that the objects X1 and X2, which user A cannot visually recognize, are displayed in front of the obstacle D1 when viewed from user A. The output unit 56 transmits the changed separation distance and the objects X1 and X2 to user A's communication terminal 10.
  • User A's communication terminal 10 receives the changed separation distance and objects X1 and X2 from the spatial structure server 50 .
  • the communication terminal 10 displays the object Y1 displayed at the separation distance in which the shape of the object X1 is changed.
  • the communication terminal 10 similarly performs display processing for the object X2, and displays the object Y2.
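  • The change of separation distance can be sketched as placing the object on the straight line between the user and the object at a new, shorter distance; the concrete distance value and the function name below are assumptions.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def reposition_for_display(user_pos: Vec3, object_pos: Vec3, new_distance: float) -> Vec3:
    """Place the object on the line connecting the user and the object,
    at `new_distance` from the user (e.g. in front of an obstacle)."""
    direction = tuple(o - u for u, o in zip(user_pos, object_pos))
    length = sum(d * d for d in direction) ** 0.5 or 1e-9
    unit = tuple(d / length for d in direction)
    return tuple(u + new_distance * d for u, d in zip(user_pos, unit))

# A traffic light 30 m ahead but hidden behind an obstacle could be redrawn 5 m ahead:
print(reposition_for_display((0.0, 0.0, 1.5), (30.0, 0.0, 5.0), 5.0))
```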
  • The updating unit 57 updates the object information in the spatial structure data based on at least a part of the object in the real space included in the user's viewing state acquired by the acquiring unit 52. Specifically, the updating unit 57 updates the object information in the spatial structure data stored in the storage unit 51 based on at least part of the object in the real space estimated from the captured image by the acquiring unit 52. In the example shown in FIG. 1, the user's communication terminal 10 and the surrounding communication terminals 10 capture images of the objects X1 and X2 and transmit the captured images, the global position information, and the imaging direction (the user's viewing direction) to the spatial structure server 50 in real time.
  • the acquisition unit 52 estimates the positions and/or shapes of the objects X1 and X2 viewed from a certain direction from the captured image and the imaging direction.
  • The updating unit 57 updates, in real time, the positions and/or shapes of the objects X1 and X2 in the spatial structure data stored in the storage unit 51, based on the information about the objects X1 and X2 viewed from a certain direction that is estimated (acquired) by the acquiring unit 52.
  • Next, the processing performed by the information processing system 1 will be described. Specifically, the processing of acquiring information such as the position of the communication terminal 10 carried by the user and the user's viewing state, estimating the ideal viewing state, comparing the ideal viewing state with the user's viewing state, estimating from the difference between them the information that lies ahead of the user's line of sight but is not visually recognized by the user, and presenting that information to the user will be described.
  • In addition, the processing for updating an object in the three-dimensional virtual space based on the acquired information will be described. FIGS. 9 and 10 are flowcharts showing the processing performed by the information processing system 1.
  • First, the acquisition unit 52 acquires the user's position and the user's viewing state (step S101). Specifically, at the same time that the global position information is acquired by the acquisition unit 52, the state that the user is estimated to be actually viewing in the real space, at least a part of the object in the real space, the user's movement information, the user's viewing direction, and the user's purpose information may be acquired. Subsequently, the specifying unit 53 specifies the spatial structure data corresponding to the user's position (step S102). The specifying unit 53 may specify the spatial structure data further based on the user's viewing direction included in the user's viewing state.
  • the first estimating unit 54 estimates the ideal viewing state based on the user's position in the spatial structure data specified by the specifying unit 53 (step S103). Also, the first estimation unit 54 may estimate the ideal viewing state further based on the viewing direction of the user. In addition, the future position of the user may be estimated by the first estimation unit 54 from the movement information of the user. The first estimation unit 54 may estimate the ideal viewing state based on the user's future position in the specified spatial structure data. Furthermore, the first estimation unit 54 may specify one or more objects from all objects that can be included in the user's field of view, taking into account the degree of association between the user's purpose information and the type of object. Then, the first estimation unit 54 estimates an ideal viewing state in which the user can visually recognize all the specified objects.
  • the second estimation unit 55 compares the ideal viewing state estimated by the first estimation unit 54 with the viewing state of the user. Then, the second estimation unit 55 estimates an object that the user cannot visually recognize from the difference between the ideal visual recognition state and the user's visual recognition state (step S104). Subsequently, the output unit 56 outputs the object estimated to be invisible to the user so that the user can visually recognize the object (step S105). Specifically, the output unit 56 changes the separation distance between the user's position and the position of the object that the user cannot visually recognize. Then, the output unit 56 outputs the object.
  • First, the acquisition unit 52 acquires the user's position and the user's viewing state, as in step S101 (step S201). Specifically, the acquiring unit 52 estimates what objects are included in the image captured by the communication terminal 10 carried by the user or by a communication terminal 10 such as a surveillance camera placed around the user, and obtains at least part of the object in the real space and the imaging direction (the user's viewing direction).
  • the update unit 57 updates the object in the spatial structure data based on at least a part of the object in the physical space included in the user's visual recognition state acquired by the acquisition unit 52 (step S202).
  • As described above, the information processing system 1 includes: the storage unit 51 that stores the spatial structure data, which is data representing an object in the real space in a three-dimensional virtual space and represents the shape of the object at a position in the virtual space corresponding to the position of the object in the real space; the acquisition unit 52 that acquires the position of the user and the viewing state of the user; the specifying unit 53 that specifies the spatial structure data corresponding to the user's position from the storage unit 51; the first estimating unit 54 that estimates, based on the position of the user in the spatial structure data specified by the specifying unit 53, an ideal viewing state in which the user can visually recognize one or more objects that can be included in the field of view of the user; the second estimation unit 55 that compares the ideal viewing state estimated by the first estimation unit 54 with the viewing state of the user and estimates, from the difference between them, an object that the user cannot visually recognize; and the output unit 56 that outputs the object estimated by the second estimation unit 55 to be invisible to the user so that the user can visually recognize it.
  • In the information processing system 1, the position of the user and the viewing state of the user are acquired, the spatial structure data corresponding to the position of the user is specified, the ideal viewing state is estimated based on the position of the user in the specified spatial structure data, and the ideal viewing state is compared with the user's viewing state. From the difference between the ideal viewing state and the user's viewing state, an object that the user cannot visually recognize is estimated, and that object is output so that the user can visually recognize it.
  • Here, there are cases where information around the user is physically blocked and the user cannot visually recognize the information; in such cases, the interests of the individual or entity concerned may be harmed.
  • For example, traffic signs may be obstructed by cars, roadside trees, and the like, thereby reducing the safety of the user while driving or walking.
  • Further, when company advertisements on a train are blocked by passengers, straps, and the like, the opportunity for users to see the advertisements is lost, and the profits of the company are impaired.
  • In contrast, in the information processing system 1, an object that the user cannot visually recognize in the real space is displayed on the user's terminal, such as an AR display device. Accordingly, information that the user cannot visually recognize can be appropriately presented to the user. As a result, it is possible to secure the convenience of the user who uses the information and the benefit of the individual or group who provides the information.
  • For example, suppose that, while the user is driving, a traffic light is blocked by a truck running in front of the car that the user is driving, and that the traffic light is about to change from yellow to red.
  • If the traffic light cannot be seen by the user while driving, the car driven by the user may run through the intersection even though the traffic light is red.
  • In the information processing system 1, however, the traffic light is displayed as an object Z on the user's communication terminal 10, such as an AR display device. Accordingly, the traffic light, which is information that the user cannot visually recognize, can be appropriately presented to the user.
  • In the information processing system 1, the output unit 56 changes the separation distance between the position of the user and the position of the object estimated to be invisible to the user so that the user can visually recognize the object, and outputs that object. With such a configuration, an object that is not visible to the user is moved to a position visible to the user. Accordingly, information that the user cannot visually recognize can be appropriately presented to the user.
  • the acquisition unit 52 may acquire the visual recognition state of the user based on information captured by the communication terminal 10 carried by the user or the communication terminals 10 arranged around the user.
  • the viewing state of the user is acquired from various terminals. Accordingly, information that the user cannot visually recognize can be more appropriately presented to the user.
  • the user's visual recognition state is acquired by the communication terminal 10 carried by the user or a surveillance camera (communication terminal 10) placed on the street. This makes it possible to appropriately present an object that the user cannot visually recognize to the user.
  • the information processing system 1 further includes an updating unit 57 that updates the information of the object in the spatial structure data based on the object in the real space included in the visual recognition state of the user acquired by the acquiring unit 52.
  • the object in the spatial structure data is updated to the latest state by a terminal carried by the user or a surveillance camera placed around the user.
  • the information of the transmitted object will be up-to-date. Accordingly, information that the user cannot visually recognize can be presented appropriately to the user.
  • In the information processing system 1, the acquisition unit 52 acquires the user's viewing state including the user's viewing direction, and the specifying unit 53 specifies the spatial structure data based on the user's position and the user's viewing direction included in the user's viewing state.
  • With such a configuration, the spatial structure data is specified and the ideal viewing state is estimated only for the direction in which the user is looking, thereby reducing the amount of processing that the information processing system 1 performs for the above-mentioned specification and estimation.
  • the load on the information processing system 1 can be reduced without impairing user convenience.
  • In the information processing system 1, the acquisition unit 52 acquires the user's viewing state including the user's movement information, and the first estimation unit 54 estimates the user's future position from the user's movement information and estimates the ideal viewing state based on the user's future position. According to such a configuration, the ideal viewing state is estimated at a position to which the user may move in the future. This makes it possible to present information that the user cannot visually recognize to the user while suppressing delay.
  • For example, the user's movement information includes the user's movement history and movement purpose; from these, the user's destination is predicted, and information that the user will not be able to visually recognize there is estimated in advance.
  • In the information processing system 1, the storage unit 51 further stores the type of each object, the acquisition unit 52 acquires the user's viewing state including the user's purpose information, and the first estimation unit 54 specifies one or more objects from all the objects that can be included in the user's field of view, taking into consideration the degree of association between the user's purpose information included in the user's viewing state and the type of each object, and estimates an ideal viewing state in which the user can visually recognize the specified objects.
  • With such a configuration, an object associated with the user's purpose information is estimated as an object that the user cannot visually recognize, and that object is presented to the user. As a result, the user's convenience and the quality of the user's experience can be improved.
  • The communication terminal 10, the positioning server 30, and the spatial structure server 50 included in the information processing system 1 may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • the term "apparatus” can be read as a circuit, device, unit, or the like.
  • The hardware configuration of the communication terminal 10, the positioning server 30, and the spatial structure server 50 may include one or a plurality of each of the devices shown in the figure, or may be configured without including some of the devices.
  • Each function of the communication terminal 10, the positioning server 30, and the spatial structure server 50 is realized by loading predetermined software (programs) onto hardware such as the processor 1001 and the memory 1002, causing the processor 1001 to perform calculations, and controlling communication by the communication device 1004 and the reading and/or writing of data in the memory 1002 and the storage 1003.
  • the processor 1001 for example, operates an operating system and controls the entire computer.
  • the processor 1001 may be configured with a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic device, registers, and the like.
  • the control functions such as the positioning unit 32 of the location positioning server 30 may be realized by the processor 1001 .
  • the processor 1001 also reads programs (program codes), software modules and data from the storage 1003 and/or the communication device 1004 to the memory 1002, and executes various processes according to them.
  • As the programs, programs that cause a computer to execute at least part of the operations described in the above embodiment are used.
  • control functions of the positioning unit 32 of the location positioning server 30 may be stored in the memory 1002 and implemented by a control program running on the processor 1001, and other functional blocks may be similarly implemented. Although it has been described that the above-described various processes are executed by one processor 1001, they may be executed by two or more processors 1001 simultaneously or sequentially. Processor 1001 may be implemented with one or more chips. Note that the program may be transmitted from a network via an electric communication line.
  • The memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory).
  • the memory 1002 may also be called a register, cache, main memory (main storage device), or the like.
  • the memory 1002 can store executable programs (program codes), software modules, etc. for implementing a wireless communication method according to an embodiment of the present invention.
  • The storage 1003 is a computer-readable recording medium, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disc, a magneto-optical disc (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy disk, a magnetic strip, and/or the like.
  • Storage 1003 may also be called an auxiliary storage device.
  • the storage medium described above may be, for example, a database, server, or other suitable medium including memory 1002 and/or storage 1003 .
  • the communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via a wired and/or wireless network, and is also called a network device, network controller, network card, communication module, etc., for example.
  • the input device 1005 is an input device (for example, keyboard, mouse, microphone, switch, button, sensor, etc.) that receives input from the outside.
  • the output device 1006 is an output device (eg, display, speaker, LED lamp, etc.) that outputs to the outside. Note that the input device 1005 and the output device 1006 may be integrated (for example, a touch panel).
  • Each device such as the processor 1001 and the memory 1002 is connected by a bus 1007 for communicating information.
  • the bus 1007 may be composed of a single bus, or may be composed of different buses between devices.
  • The communication terminal 10, the positioning server 30, and the spatial structure server 50 may include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array), and part or all of each functional block may be realized by that hardware.
  • processor 1001 may be implemented with at least one of these hardware.
  • The information processing system 1 has been described as including the communication terminal 10, the positioning server 30, and the spatial structure server 50, but it is not limited to this; each function of the information processing system 1 may be implemented by the spatial structure server 50 alone.
  • In the embodiment described above, the spatial structure server 50 includes the storage unit 51, the acquisition unit 52, the identification unit 53, the first estimation unit 54, the second estimation unit 55, the output unit 56, and the update unit 57; however, another server or the communication terminal 10 may include some or all of these functional components.
  • Each aspect and embodiment described in this specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-Wide Band), Bluetooth (registered trademark), other suitable systems, and/or extended next-generation systems based on these.
  • Input and output information may be saved in a specific location (for example, memory) or managed in a management table. Input/output information and the like may be overwritten, updated, or appended. The output information and the like may be deleted. The entered information and the like may be transmitted to another device.
  • The determination may be made by a value represented by one bit (0 or 1), by a true/false value (Boolean: true or false), or by numerical comparison (for example, comparison with a predetermined value).
  • Notification of predetermined information is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).
  • Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise, includes instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and the like.
  • Software, instructions, and the like may be transmitted and received via a transmission medium.
  • For example, when software is transmitted from a website, server, or other remote source using wired and/or wireless technologies, those wired and/or wireless technologies are included within the definition of transmission medium.
  • Data, instructions, commands, information, signals, bits, symbols, chips, and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
  • Information, parameters, and the like described in this specification may be represented by absolute values, by values relative to a predetermined value, or by other corresponding information.
  • A communication terminal may also be referred to by those skilled in the art as a mobile communication terminal, subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term.
  • Any reference to elements using designations such as "first" and "second" does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient method of distinguishing between two or more elements. Thus, references to first and second elements do not imply that only two elements may be employed, or that the first element must precede the second element in any way.
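
As an illustrative aid only (not part of the patent disclosure), the following sketch shows one way the functional components listed above, the storage unit 51 through the update unit 57 of the spatial structure server 50, could be organized in code. All class names, method names, and data shapes are assumptions made for this sketch.

```python
from dataclasses import dataclass, field


@dataclass
class SpatialStructureData:
    # Hypothetical shape: object ID -> 3D position in the mapped space.
    objects: dict[str, tuple[float, float, float]] = field(default_factory=dict)


class SpatialStructureServer:
    """One possible grouping of the units 51-57 described above (illustrative)."""

    def __init__(self) -> None:
        self._areas: dict[str, SpatialStructureData] = {}  # storage unit 51

    # storage unit 51 / update unit 57: keep spatial structure data up to date
    def store(self, area_id: str, data: SpatialStructureData) -> None:
        self._areas[area_id] = data

    # acquisition unit 52: receive the user's location and current visibility state
    def acquire(self, report: dict) -> tuple[str, set[str]]:
        return report["area_id"], set(report["visible_object_ids"])

    # identification unit 53: pick the spatial structure data matching the location
    def identify(self, area_id: str) -> SpatialStructureData:
        return self._areas[area_id]

    # first estimation unit 54: simplified here to "every object registered in the area"
    def estimate_ideal(self, data: SpatialStructureData) -> set[str]:
        return set(data.objects)

    # second estimation unit 55: objects the user should see but currently does not
    def estimate_unseen(self, ideal: set[str], actual: set[str]) -> set[str]:
        return ideal - actual

    # output unit 56: hand the unseen objects back for presentation to the user
    def output(self, unseen: set[str]) -> list[str]:
        return sorted(unseen)

    def handle(self, report: dict) -> list[str]:
        area_id, actual = self.acquire(report)
        data = self.identify(area_id)
        return self.output(self.estimate_unseen(self.estimate_ideal(data), actual))
```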

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Economics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Computer Hardware Design (AREA)
  • Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Navigation (AREA)

Abstract

This information processing system comprises: a storage unit that stores spatial structure data; an acquisition unit that acquires the location of a user and the visibility state of the user; an identification unit that identifies, from the storage unit, spatial structure data corresponding to the location of the user; a first estimation unit that estimates, on the basis of the location of the user in the spatial structure data identified by the identification unit, an ideal visibility state in which the user could view all objects that could be included in the user's field of view; a second estimation unit that compares the ideal visibility state estimated by the first estimation unit with the visibility state of the user, and estimates, from the difference between the two, any object that the user cannot view; and an output unit that outputs, so as to be visible to the user, any object estimated by the second estimation unit as not viewable by the user.
PCT/JP2022/002689 2021-01-29 2022-01-25 Système de traitement d'informations WO2022163651A1 (fr)
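
To make the processing flow described in the abstract concrete, here is a minimal, self-contained sketch of the ideal-versus-actual visibility comparison. It is an illustration under stated assumptions, not the claimed implementation: the simple field-of-view test, the flat 2D coordinates, and every name below are invented for this example.

```python
import math

# Hypothetical spatial structure data: object name -> 2D position (metres).
SPATIAL_STRUCTURE = {
    "shop_sign": (5.0, 0.0),
    "bus_stop": (2.0, 2.0),
    "kiosk": (-3.0, 1.0),
}


def estimate_ideal_visibility(user_pos, heading_deg, fov_deg=120.0, max_range=10.0):
    """First estimation: every object that could fall within the user's field of view."""
    ideal = set()
    for name, (x, y) in SPATIAL_STRUCTURE.items():
        dx, dy = x - user_pos[0], y - user_pos[1]
        if math.hypot(dx, dy) > max_range:
            continue
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest signed angle between the object's bearing and the user's heading.
        delta = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(delta) <= fov_deg / 2.0:
            ideal.add(name)
    return ideal


def estimate_unseen(ideal, actually_visible):
    """Second estimation: objects the user should be able to see but currently cannot."""
    return ideal - actually_visible


if __name__ == "__main__":
    ideal = estimate_ideal_visibility(user_pos=(0.0, 0.0), heading_deg=0.0)
    # Suppose the terminal reports that only the shop sign is currently recognized,
    # e.g. because a parked truck hides the bus stop from the user.
    unseen = estimate_unseen(ideal, actually_visible={"shop_sign"})
    print("Objects the output unit would present:", unseen)  # -> {'bus_stop'}
```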

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021013158A JP2024049400A (ja) 2021-01-29 2021-01-29 情報処理システム
JP2021-013158 2021-01-29

Publications (1)

Publication Number Publication Date
WO2022163651A1 (fr)

Family

ID=82653583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/002689 WO2022163651A1 (fr) 2021-01-29 2022-01-25 Système de traitement d'informations

Country Status (2)

Country Link
JP (1) JP2024049400A (fr)
WO (1) WO2022163651A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004234253A (ja) * 2003-01-29 2004-08-19 Canon Inc 複合現実感呈示方法
JP2015153426A (ja) * 2014-02-18 2015-08-24 ハーマン インターナショナル インダストリーズ インコーポレイテッド 着目場所の拡張視界の生成
JP2016206447A (ja) * 2015-04-23 2016-12-08 セイコーエプソン株式会社 頭部装着型表示装置、情報システム、頭部装着型表示装置の制御方法、および、コンピュータープログラム
JP2018146326A (ja) * 2017-03-03 2018-09-20 日立オートモティブシステムズ株式会社 移動体の位置推定装置及び方法
JP2019197499A (ja) * 2018-05-11 2019-11-14 株式会社スクウェア・エニックス プログラム、記録媒体、拡張現実感提示装置及び拡張現実感提示方法

Also Published As

Publication number Publication date
JP2024049400A (ja) 2024-04-10

Similar Documents

Publication Publication Date Title
US10445945B2 (en) Directional and X-ray view techniques for navigation using a mobile device
US20230056006A1 (en) Display of a live scene and auxiliary object
CN111044061A (zh) 一种导航方法、装置、设备及计算机可读存储介质
CN111443882B (zh) 信息处理装置、信息处理系统、以及信息处理方法
US20220076469A1 (en) Information display device and information display program
US20230392943A1 (en) Server apparatus and information processing method
US20160205355A1 (en) Monitoring installation and method for presenting a monitored area
CN111064936A (zh) 一种路况信息显示方法及ar设备
JP2011113245A (ja) 位置認識装置
WO2022163651A1 (fr) Système de traitement d'informations
US20220198794A1 (en) Related information output device
US20220295017A1 (en) Rendezvous assistance apparatus, rendezvous assistance system, and rendezvous assistance method
JP7198966B2 (ja) 測位システム
US20180293796A1 (en) Method and device for guiding a user to a virtual object
JP2005309537A (ja) 情報提示装置
WO2022123922A1 (fr) Système de traitement de l'information
US10345965B1 (en) Systems and methods for providing an interactive user interface using a film, visual projector, and infrared projector
WO2021166747A1 (fr) Système de traitement d'informations
JP7482971B1 (ja) 情報処理装置、プログラム、システム、及び情報処理方法
WO2021172137A1 (fr) Système de partage de contenu et terminal
WO2023008277A1 (fr) Système de partage de contenu
US20220309754A1 (en) Information processing device, information processing method, and program
US20240087157A1 (en) Image processing method, recording medium, image processing apparatus, and image processing system
CN116249074A (zh) 智能穿戴设备的信息展示方法、智能穿戴设备及介质
CN116056017A (zh) 智能穿戴设备的信息展示方法、智能穿戴设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22745871

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22745871

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP