EP3891719A1 - Object-based method and system for collaborative solution finding - Google Patents
Object-based method and system for collaborative solution finding
- Publication number
- EP3891719A1 (application EP19812794.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- objects
- projection zone
- projection
- recording
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 66
- 230000008569 process Effects 0.000 claims abstract description 16
- 238000013480 data collection Methods 0.000 claims abstract description 8
- 230000009471 action Effects 0.000 claims description 25
- 230000008859 change Effects 0.000 claims description 11
- 238000012545 processing Methods 0.000 claims description 8
- 238000011156 evaluation Methods 0.000 claims description 6
- 230000002123 temporal effect Effects 0.000 claims description 4
- 238000001514 detection method Methods 0.000 abstract description 2
- 230000003993 interaction Effects 0.000 description 29
- 230000005540 biological transmission Effects 0.000 description 5
- 239000011449 brick Substances 0.000 description 5
- 238000011161 development Methods 0.000 description 4
- 230000018109 developmental process Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- VVQNEPGJFQJSBK-UHFFFAOYSA-N Methyl methacrylate Chemical compound COC(=O)C(C)=C VVQNEPGJFQJSBK-UHFFFAOYSA-N 0.000 description 2
- 229920005372 Plexiglas® Polymers 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000004088 simulation Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 238000012800 visualization Methods 0.000 description 2
- 206010039203 Road traffic accident Diseases 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000004888 barrier function Effects 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 230000008846 dynamic interplay Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 239000007788 liquid Substances 0.000 description 1
- 239000002096 quantum dot Substances 0.000 description 1
- 230000036632 reaction speed Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 239000000758 substrate Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B25/00—Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
- G09B25/04—Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes of buildings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B25/00—Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
- G09B25/06—Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes for surveying; for geography, e.g. relief models
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B25/00—Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
- G09B25/08—Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes of scenic effects, e.g. trees, rocks, water surfaces
Definitions
- the present invention relates to an object-based method and system for collaborative solution finding, with which user data are generated and processed and the result is visualized.
- WO 2014/070120 describes an “augmented reality” method with which several people can simultaneously and individually examine an object.
- It uses an arrangement consisting of a mobile recording device (121), which also serves as a display (300), and a coded surface (500) on which a physical object (501) can be located or on which a virtual object is displayed.
- the recording device (a camera) and the display are arranged separately from one another.
- the ratio of the coded surface, which is located in the recording area of the camera, to the total area is advantageously 0.01.
- the coded surface can also have a multiplicity of differently coded segments, i.e. a type of grid.
- A disadvantage of the aforementioned system is that, by using specifically coded Lego blocks, only buildings can be replicated, so that the solution is essentially limited to how high the buildings should be, what is housed in them (restaurant, apartment, factory or university), how this changes the density of traffic and/or population, and whether this development can create an innovative environment.
- Moreover, the system is fixed to square/rectangular shapes and can only partially depict differently shaped elements such as trees, lakes and diagonal streets, as are common in many European cities.
- The inventors have now found a computer-based system or device and a computer-implemented method that allow many more degrees of freedom in posing questions, and thus in finding solutions, than the known systems and methods, so that they can be used more widely.
- The known systems also work only on surfaces, i.e. in the x, y coordinate system, while the present system also records the z coordinate, i.e. works 3-dimensionally.
- acceptance by users is very high.
- anyone can be a user.
- the system can be used by laypersons and experts alike, without any technical training, technical skill or spatial imagination. When using the invention, there is therefore a high level of participation in solution-finding processes, with valid and representative results for, e.g., a city worth living in.
- the solution is based on broader data, since technology-skeptical or technology-inexperienced target groups are also involved, so that the results as a whole are more meaningful and reliable.
- the system according to the invention is also suitable for simulating the effects of user actions on the content of the projection zone.
- the high level of user acceptance is due in particular to the use of physically tangible, non-digital objects, which are positioned and arranged on an area onto which content/information is projected or which are introduced into a virtual environment, as well as to the immediately available system-generated feedback to the user or users. Immediately after placing and/or moving the object or objects, the users see the result of their action through the intelligent adaptation of the information and content visible in the projection zone and on the display unit.
- the invention therefore relates to a device or system as defined in the claims and the method described therein.
- the invention relates in particular, in [embodiment A], to a computer-aided method for collaborative solution finding, comprising the following steps: i) recording the position of one or more intelligent objects within the projection zone and/or recording the change in their position over a certain period of time by a recording unit; ii) recording the position of the projection zone in relation to the intelligent object; iii) processing the received information in the processor; and iv) decoding the data and controlling the information shown on the display unit and/or projection surface, the decoding of the intelligent objects taking place without the aid of a code plan, since attributes are assigned to the objects on the software side and the contents displayed in the projection zone can be read without the aid of a code plan or fixed grid.
- the invention relates to the method as in [embodiment B].
- the software-side attributes comprise the following parameters: a numerically unique identifier of each intelligent object; a textual description of each intelligent object; the color of the intelligent object; a textual description of the surface structure of the intelligent object; a textual description of the shape of the intelligent object; the size of the intelligent object; a description of the included actions of the intelligent object; the action of the intelligent object in the form that it adds something to or removes something from a content on the projection surface; and the action of the intelligent object in the form that the user marks a content on the projection surface positively or negatively.
- the invention relates to the method as in [embodiment C].
- the user actions with intelligent objects include the following parameters: the numerically unique designation of a user action with an intelligent object; the numerically unique designation of a user; the recording of the date and the time with hour, minute, second and millisecond; the recording of the x coordinate on the projection surface; the recording of the y coordinate on the projection surface; the recording of the z coordinate on the projection surface; the speed of the user action with an intelligent object; and the direction of the user action with the intelligent object.
- the invention relates to a system for collaborative solution finding for carrying out the method as described in [embodiment A], which is formed from a data collection unit 101, a processor 104, a control unit 312 and a display unit 109, wherein the data collection unit 101 comprises a projection zone and a recording unit 105, which records and processes the position and/or movement of any objects in the projection zone, forwards a result for the solution finding to a display unit 109 and possibly returns a further result to the projection zone.
- the aforementioned data collection unit 101 preferably records the position and / or movement of any objects in the projection zone, processes them and displays the contents on a display unit 109 and, if necessary, returns a further result to the projection zone.
- the invention relates to the system as described in [embodiment D], which is characterized in that the content of the projection zone changes dynamically.
- the invention relates to the method and the use of the system according to [embodiments D and E] for planning infrastructure and urban concepts.
- the invention differs in particular from the known and described systems and methods in that the objects used are intelligent. That is, the objects are recognized by the recording system, as a result of which they are referred to below as "intelligent objects".
- the objects have attributes that are recognized independently by the system, without the aid of a code plan. They are identified using self-learning algorithms. This is particularly advantageous because arbitrary objects can thus be used in the method according to the invention.
- Realistic objects offer users low-threshold access to collaborative solution finding. This results in a wide range of possible uses for the method, system and device according to the invention, and user acceptance is thereby positively influenced.
- optically and system-recognizable attributes are, e.g., the shape of the object, its color, size, surface structure, texture and haptic parameters. Attributes recognized by the system are also called software-side attributes. If the system does not recognize an object, a learning process starts so that it can recognize this object in the future.
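- The patent leaves the concrete recognition algorithm open, beyond requiring that it be self-learning and work without a code plan. Purely as an illustrative sketch, assuming each detected object has already been reduced to a numeric feature vector (e.g. dominant color, size, shape descriptors), such an incremental recognizer could look as follows; all names are hypothetical:

```python
import numpy as np

class ObjectRecognizer:
    """Sketch: identifies intelligent objects by their feature vectors
    instead of a code plan; unknown objects trigger a learning process."""

    def __init__(self, threshold: float = 0.15):
        self.threshold = threshold               # max feature distance for a match
        self.known: dict[int, np.ndarray] = {}   # Object_ID -> feature vector
        self.next_id = 1

    def identify(self, features: np.ndarray) -> tuple[int, bool]:
        """Return (object_id, was_known); store unknown objects so they
        are recognized in the future."""
        for object_id, stored in self.known.items():
            if np.linalg.norm(stored - features) < self.threshold:
                return object_id, True
        object_id = self.next_id                 # unknown: start "learning"
        self.known[object_id] = features.copy()
        self.next_id += 1
        return object_id, False
```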
- the objects are real, 3-dimensional objects and easily movable. Similar to playing pieces in a board game, they are moved in the projection zone, preferably on the projection surface.
- Objects according to the invention are, for example, game pieces, miniature figures, cars, drones, houses, street signs and trees. Schematic 3-dimensional objects (e.g. emoticons) can be used equally.
- a projection zone is understood to mean a location at which the desired scenario is shown.
- the projection zone can be a surface (“projection surface”) or a room (“projection space”).
- the projection surface has only x and y coordinates, while the projection space has x, y and z coordinates.
- Maps and plans on which the objects are positioned are, for example, projected onto the projection surface according to the invention.
- the projection surfaces are not touchscreens, because the acceptance and commitment of the users is reduced by the use of such high-tech screens or tables.
- LED screens and transparent surfaces on which information can be imaged by means of a projector are suitable as the projection surface.
- This information includes, for example, maps, blueprints, city maps, floor plans and much more.
- the content/information projected onto the projection surface can be freely selected depending on the use of the system or device and the method. Conceivable are playing fields as known from board games or sports facilities, as well as technical or architectural plans, plans of infrastructures, city maps and sketches.
- the information projected onto the projection surface can change, e.g. by projecting maps that contain live data. Such maps are known; examples are Google Maps including current traffic forecasts, or maps of a control center displaying all buses and trains currently running in local public transport, or the emergency vehicles of the police and fire brigade. Projection spaces are, for example, virtual environments or holograms and all types of spatial animations.
- a screen can also be used as the projection surface.
- Suitable screens are known and are based, for example, on the following technologies: LED (light emitting diode), QD-LED (quantum dot display), OLED (organic light-emitting diode), OLET (organic light-emitting transistor), SED (surface-conduction electron-emitter display), FED (field emission display) and FLD (ferro liquid display). Suitable screens can be found easily.
- the recording unit records the location, position and/or movement of the object, i.e. the position of the object within the projection zone and, if the object is moved, also the temporal, directional change in position, including acceleration.
- the recording unit then records the locations, positions and/or movements of the objects in the projection zone and, if the objects are moved, also their temporal, directional changes in position, including accelerations.
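- How the temporal, directional change in position is computed is not specified in the patent. A minimal sketch, assuming the recording unit delivers timestamped (x, y, z) samples per object, estimates velocity, acceleration and direction by finite differences:

```python
import numpy as np

def motion_from_samples(times: np.ndarray, positions: np.ndarray):
    """times: shape (n,) in seconds; positions: shape (n, 3) of (x, y, z)
    samples of one object in the projection zone."""
    velocity = np.gradient(positions, times, axis=0)      # per-axis speed
    acceleration = np.gradient(velocity, times, axis=0)   # per-axis acceleration
    speed = np.linalg.norm(velocity, axis=1)
    direction = velocity / np.maximum(speed, 1e-9)[:, None]  # unit vectors
    return velocity, acceleration, speed, direction
```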
- the positioning and moving of one or more such objects in the projection zone can trigger a change in the content displayed on the projection surface, in the projection space and/or on the display unit. This is described as an example in FIG. 7.
- the aim of these recordings is not the detection of the object or objects as such, but rather of their attributes, positioning(s) and relative movement(s) within the projection zone, as well as the identification of the moved object or objects in relation to the content/information present on the projection surface and, if appropriate, of their change in position(s) as a function of time.
- Another aim of the recording can also be the collection of data with regard to the movement and, above all, the movement pattern of the users.
- of interest are the reaction speeds, the speed during positioning, the sequence in which different objects are positioned, and their relative positions on the projection surface.
- Non-personalized information about a user that is suitable for classifying the same can also be collected.
- Such information is, for example, gender, age, culture.
- the method and the system according to the invention can also be used for collecting data on user behavior, which are then evaluated with the aid of algorithms or used for training artificially intelligent systems.
- the recording is done by the recording unit. All recording devices are suitable, in particular optical reading devices, e.g. imaging cameras, sensors or other measuring devices such as radar and sonar, or combinations thereof. Common reading devices use two-dimensional charge-coupled device (CCD) image sensors.
- the recording unit is fixed over the projection zone.
- this fixation is important in order to carry out the system calibration necessary for recording the information to be collected.
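- The patent does not prescribe a calibration procedure. One common approach, sketched here purely as an assumption, estimates a perspective transform from four reference points whose positions are known both in the camera image and on the projection surface; the coordinate values below are invented:

```python
import cv2
import numpy as np

# Four reference points seen by the fixed recording unit (pixel coordinates)
# and their known locations on the projection surface (e.g. in centimetres).
pixel_pts = np.float32([[102, 87], [1815, 92], [1808, 1012], [110, 1020]])
surface_pts = np.float32([[0, 0], [120, 0], [120, 80], [0, 80]])

H = cv2.getPerspectiveTransform(pixel_pts, surface_pts)

def to_surface(px: float, py: float) -> tuple[float, float]:
    """Map a detected object position from camera pixels to surface coordinates."""
    pt = cv2.perspectiveTransform(np.float32([[[px, py]]]), H)
    return float(pt[0, 0, 0]), float(pt[0, 0, 1])
```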
- the recording unit is also positioned so that it captures the entire projection zone.
- the method according to the invention for collaborative solution finding comprises the following steps: recording the position of one or more intelligent objects within the projection zone and/or recording the change in their position over a certain period of time by a recording unit; recording the position of the projection zone in relation to the intelligent object; processing the received information in the processor; and decoding the data and controlling the information shown on the display unit and/or projection surface.
- the decoding is the recognition of specific attributes of the intelligent objects without a code plan and independently of a grid system.
- data coding, i.e. the characterization of features with, for example, short descriptions of their characteristics, is documented in a code plan. In the present case, this would be the pseudonymization of objects, attributes and their characteristics.
- the method essentially runs as follows: The user places or moves an object on the projection surface or in the projection space. The recording unit receives the object information that is sent via the processor to a decoding unit.
- a user can move multiple objects or multiple users can move one or more objects.
- the spatial position of the objects relative to one another is then detected, which enables a further depth of information, e.g. in relation to the metric distance or relative position of two or more objects to each other, and in relation to their position in the projection zone.
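- As a small illustration of this depth of information, the metric distance between every pair of decoded objects follows directly from their recorded coordinates (a sketch; the Object_ID keys anticipate the attribute naming of FIG. 4):

```python
import itertools
import math

def pairwise_distances(objects: dict[int, tuple[float, float, float]]):
    """Metric distance between every pair of objects in the projection zone,
    keyed by their Object_IDs; coordinates in surface units (e.g. cm)."""
    return {
        (id_a, id_b): math.dist(pos_a, pos_b)
        for (id_a, pos_a), (id_b, pos_b)
        in itertools.combinations(objects.items(), 2)
    }
```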
- if the object is not known, a learning process for recognizing the object starts, i.e. a self-learning algorithm with the aim of being able to identify the object in the future.
- if the object is known, further information or contents of the projection zone are retrieved. This information is then checked by the system (e.g. the processor) to determine whether the content is known. If this is not the case, a learning process also starts here. If the content is known, the question arises whether an interaction field has been recognized by the system. Interaction fields are specifically defined areas within the projection zone in relation to the projected content. They are used to send a targeted control signal with specific content to the projection zone and display unit via a data transmission signal, using the intelligent objects. The system's response to the positioning of an object on an interaction field can be dynamic or non-dynamic. A dynamic change is, for example, that the display language changes or that other content appears on the projection surface.
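- The patent defines interaction fields functionally rather than as a data structure. A minimal sketch, assuming rectangular fields in projection-surface coordinates and a callback that the control unit would turn into a control signal (all names hypothetical):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class InteractionField:
    """A specifically defined area of the projected content."""
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    on_trigger: Callable[[int], None]   # control action, given the Object_ID

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def match_field(fields: list[InteractionField],
                x: float, y: float) -> Optional[InteractionField]:
    """Return the interaction field an object was positioned on, if any."""
    return next((f for f in fields if f.contains(x, y)), None)
```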
- the contents and coordinates of elements of the projection surface are stored in relation to the coordinates of the positioned and known object and further processed by a processor.
- the control unit sends a data transmission signal to the processor, so that the coordinates of the object, linked to the projection-zone content and the user interaction, are stored. Each user interaction is stored line by line in the processor or in the information unit, so that finally the sum of all user interactions can be processed together.
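- How the line-by-line storage is implemented is left open; a plausible sketch is an append-only log whose columns are the user-action parameters of FIG. 5 (file path and field names are illustrative):

```python
import csv

FIELDS = ["ID", "UserID", "Time", "X", "Y", "Z", "Accelerator", "Direction"]

def log_interaction(path: str, row: dict) -> None:
    """Append one user interaction as one line, so that the sum of all
    interactions can later be processed together."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:            # empty file: write the header first
            writer.writeheader()
        writer.writerow(row)
```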
- This data can be used for other purposes, as described for example in EP 1 983 450.
- the advantages of the system and method according to the invention lie in the fact that real objects, such as miniature figures, automobiles, drones, single-family houses, trees, etc., can be used.
- Prior art systems use coded standard objects.
- E.g. Lego bricks, which are color-coded using four fields on the underside of a brick and stand for a specific object to be displayed.
- the Lego brick coding 1x red, 1x blue, and 2x white can be used for a single-family house, for example.
- the present method takes diverse parameters of the objects into account. For example, it recognizes the specific object of a single-family house and can then differentiate between two similar houses, one with a pitched roof and one with a flat roof, or the like. As already mentioned above, the parameters considered are, e.g., color, texture, geometry/polygons of the body, coordinates, size and haptic parameters.
- the present system and method according to the invention recognizes different sizes of an object on a continuous scale.
- Real (projected) backgrounds are also used instead of artificial, fixed grids.
- the method according to the invention works on the projection surface and in the projection zone with continuous, stepless recording and interaction. While known systems work on a fixed grid (e.g. a 4x4 Lego-brick grid), in the present method arbitrary continuous surfaces are displayed on the projection surface. This can be a map of a country, a district or a playground. The shapes can take any form and do not have to follow a fixed size or square shape to be recognized.
- the present inventive method can also work with dynamic backgrounds and changes over time. This means that the content projected onto the projection surface or into the projection space can likewise represent, for example, moving cars or buses, which is taken into account in the recording of, and interaction with, the intelligent objects during processing.
- the system can measure in the third dimension (z-axis), can interact with the information displayed in the projection zone and can, for example, record markings where something is to be added to or removed from the underlying information (for example a map) (for example, two swings become one).
- Another aspect of the invention consists in that, following the method according to the invention, the sum of the complete actions, which is referred to here as the "scenario" and represents, for example, the execution of the action sequences shown in FIG. 6, is visualized via an information system.
- from a series of several scenarios, new structures and relationships can be derived, which means that complex issues can be presented in a reduced manner. It is conceivable, for example, to reconstruct a traffic accident on the basis of different witness statements, or to run through different scenarios for major events, such as escape routes or the routes of the emergency services to the scene.
- Information systems for visual display can be, for example, diagrams, heat maps, superimposed maps and results of several scenarios.
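- As an illustration of the heat-map case, the stored x/y coordinates of all user interactions can be binned into a 2-D histogram and rendered over the projected map; a sketch assuming surface coordinates in [0, width] x [0, height]:

```python
import numpy as np

def interaction_heatmap(xs, ys, width: float, height: float, bins: int = 50):
    """Aggregate recorded interaction positions into a grid whose cell
    values can be colored as a heat map over the projection content."""
    heat, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, width], [0, height]])
    return heat
```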
- the visualization is usually carried out via an output unit (e.g. printer, monitor, smartphone, tablet).
- the generated information can be exported as a data record and used in other information systems.
- FIG. 1 shows a schematic representation of the system according to the invention.
- FIG. 2a shows a schematic representation of the user interface with intelligent objects and interaction of a user.
- FIG. 2b shows a schematic representation of the user interface after a user interaction.
- FIG. 3 shows a schematic representation of the interactions in the method according to the invention, including electronic components.
- FIG. 4 shows the parameters of the objects.
- FIG. 5 shows the parameters of a user interaction.
- FIG. 6 shows a flow diagram for the method according to the invention.
- FIG. 7 is a top view of a possible user interface.
- FIG. 8 shows a schematic representation of the system and the device, as described in connection with FIG. 1, in which an information system is also present for evaluation.
- FIG. 9 shows a flowchart of the method according to the invention by means of which the user-generated data can be evaluated.
- FIG. 1 shows an example of the system according to the invention, which has a rollable housing 102b that holds the processor 104 and the apparatus(es) 106 for projecting 2-dimensional content, and on which the projection surface 103 is located.
- the projection surface here is a flat, transparent Plexiglas pane onto which 2-dimensional content (information) is projected from below and which at the same time serves as the recording surface for the interactions shown in FIG. 2a.
- the system according to the invention has a display unit 109 which is controlled by a data transmission signal 107 in order to transmit content to the display unit, as shown in FIG. 2b.
- FIG. 2a shows an exemplary user interface 201, formed from the projection surface 203, via which interaction with the user 210 is possible in combination with the intelligent objects 208a, 208b, 208c.
- the user 210 interacts with one or more intelligent objects 208a, 208b, 208c on the projection surface 203. Interactions take place along the x, y and z axes (x, y, z).
- the recording unit, which is not shown here, collects the relevant information, which is then transmitted from the processor 104 to the display unit 209 by the data transmission signal 207.
- FIG. 2b shows the user interface from FIG. 2a.
- shown are the intelligent objects 208c, the display unit 209 and the information 216 caused on the display unit by a user interaction. The user interaction 210 with an intelligent object 208c, and the position (x, y, z) of the same intelligent object on the content (i.e. the content depicted on the projection surface, e.g. a city map), are converted by a processor (not shown here) into content on the projection surface 215 and content for the display unit 216.
- FIG. 3 shows an example of the method according to the invention in relation to the user interaction shown and described schematically in FIG. 2b.
- Each user interaction 210 with the intelligent objects 308 is recorded via the recording unit 305 and transmitted to the processor 304.
- the latter processes the transmitted information and sends information about the intelligent objects to the decoding unit 313.
- the decoding unit 313 interacts with the information unit 311 with regard to the transmitted information that identifies the intelligent objects.
- the processor interacts with the control unit 312, which transmits certain information to the projection surface 303 and correspondingly other information to the display unit 309.
- the Object_ID is a numerically unique identifier of every intelligent object.
- the Object_Label is a textual description of every intelligent object. Color describes the color of the intelligent object. Texture is the textual description of the surface structure of the intelligent object. Geometry is the textual description of the shape of the intelligent object. Size numerically describes the size of the intelligent object. Function is the description of the included actions of the intelligent object. Plus/Minus describes the action of the intelligent object in such a way that it adds something to or removes something from a content on the projection surface. Positive/Negative describes the action of the intelligent object in such a way that the user marks content on the projection surface positively or negatively.
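- Expressed as a data structure, the parameters of FIG. 4 map one-to-one onto a record type; the concrete types in this sketch are assumptions, since the patent only distinguishes numeric from textual fields:

```python
from dataclasses import dataclass

@dataclass
class IntelligentObject:
    object_id: int          # Object_ID: numerically unique identifier
    object_label: str       # Object_Label: textual description
    color: str              # Color
    texture: str            # Texture: description of the surface structure
    geometry: str           # Geometry: description of the shape
    size: float             # Size: numeric size
    function: str           # Function: included actions
    plus_minus: int         # Plus/Minus: +1 adds to, -1 removes from content
    positive_negative: int  # Positive/Negative: user marks content +1 / -1
```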
- ID is the numerical, unique designation of a user action with an intelligent object.
- User ID is the numerical, unique designation of a user.
- Time describes the recording of the date and the time with hour, minute, second and millisecond.
- X is the recording of the x coordinate on the projection surface.
- Y is the recording of the y coordinate on the projection surface.
- Z is the recording of the z coordinate on the projection surface.
- Accelerator describes the speed of a user action with an intelligent object.
- Direction describes the direction of a user action with the intelligent object.
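- The parameters of FIG. 5 likewise map onto a per-interaction record; the types are again assumptions (Direction, for instance, could equally be a vector rather than an angle):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UserAction:
    id: int             # ID: unique designation of this user action
    user_id: int        # UserID: unique designation of the user
    time: datetime      # Time: date and time incl. millisecond
    x: float            # X coordinate on the projection surface
    y: float            # Y coordinate on the projection surface
    z: float            # Z coordinate (third dimension)
    accelerator: float  # Accelerator: speed of the action
    direction: float    # Direction of the action, e.g. angle in degrees
```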
- the method begins with checking whether the present system is calibrated with the existing intelligent objects and the contents in the projection zone. If this is not the case, the calibration is started. With the user interaction, the system receives object information about the moved intelligent object. The method starts the decoding unit, which checks in the information unit whether the moved object is known. If it is not known, the decoding unit triggers a learning process. If the object is known, the content of the projection surface is transmitted. If the content of the projection surface is not known, a learning process is initiated. If the content of the projection surface is known, the method checks for the existence of an interaction field.
- if an interaction field is recognized, the method starts the control unit, which in turn sends control signals with contents for the display unit and further contents for the projection surface. After delivery of the content for the projection surface, the process begins again with the user interaction. If no interaction field is recognized, the coordinates (x, y, z) of the object are transmitted. The method then checks whether there are dynamic contents on the projection surface. If this is the case, the status of the content on the projection surface is transmitted together with the object data. The contents of the projection surface are then transmitted and the processor is started for further processing of all available data. Processing can also be carried out using methods outside the present system. It is then checked whether there is another user interaction, with which the process is started again. If this is not the case, the method is ended.
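- Read as pseudocode, the control flow of FIG. 6 described in the two preceding paragraphs can be summarized as follows; every name is hypothetical and stands in for one of the components named in the patent (decoding unit, control unit, processor):

```python
def run(system) -> None:
    """Sketch of the FIG. 6 flow; `system` bundles the hypothetical components."""
    if not system.is_calibrated():
        system.calibrate()
    while True:
        info = system.wait_for_user_interaction()   # object information
        if info is None:                            # no further interaction
            return
        if not system.decoder.knows_object(info):
            system.decoder.learn_object(info)       # learning process
            continue
        content = system.projection_content()
        if not system.decoder.knows_content(content):
            system.decoder.learn_content(content)
            continue
        field = system.match_interaction_field(info)
        if field is not None:
            system.control_unit.send(field, info)   # contents for display and surface
        else:
            system.store_coordinates(info)          # (x, y, z) of the object
            if system.content_is_dynamic():
                system.store_content_state(info)
            system.processor.process_all()          # further processing of all data
```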
- FIG. 7 is a view from above of a possible user interface, i.e. one sees the projection surface 703 and the contents projected onto it (here a city map with a traffic density display, the thickness of the lines representing the traffic volume) and the intelligent objects 708a-d in the form of traffic signs ("no passage" signs).
- the intelligent object 708b was positioned on an interaction field (here a crossroads) and symbolizes a passage barrier for motor vehicle traffic, as described on the projection surface at the top left. The object-related changes in traffic flow (marked as bold lines in the figure) are immediately visible on the projection surface as a simulation.
- FIG. 8 shows a schematic illustration of the system and the device, as described in connection with FIG. 1, in which an information system for evaluation 818 is additionally present. Data is visualized on the output device 819, for example as a heat map 820 or diagram 821.
- FIG. 9 shows a flowchart of the method according to the invention, by means of which the generated user data and user information can be evaluated.
- the sequence shown in the flowchart follows the sequence as shown in FIG. 6. This leads to the visualized evaluation of all user interactions and their output.
- the evaluation unit is usually a processor with which the collected data are processed.
Landscapes
- Business, Economics & Management (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Educational Technology (AREA)
- Tourism & Hospitality (AREA)
- Economics (AREA)
- Health & Medical Sciences (AREA)
- Strategic Management (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- General Health & Medical Sciences (AREA)
- Primary Health Care (AREA)
- Development Economics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Environmental & Geological Engineering (AREA)
- General Life Sciences & Earth Sciences (AREA)
- Geology (AREA)
- Processing Or Creating Images (AREA)
Abstract
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18209693 | 2018-12-03 | ||
PCT/EP2019/083565 WO2020115084A1 (fr) | 2018-12-03 | 2019-12-03 | Object-based method and system for collaborative solution finding
Publications (1)
Publication Number | Publication Date |
---|---|
EP3891719A1 true EP3891719A1 (fr) | 2021-10-13 |
Family
ID=64661058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19812794.6A Withdrawn EP3891719A1 (fr) | 2018-12-03 | 2019-12-03 | Object-based method and system for collaborative solution finding
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3891719A1 (fr) |
WO (1) | WO2020115084A1 (fr) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6930681B2 (en) * | 2001-08-14 | 2005-08-16 | Mitsubishi Electric Research Labs, Inc. | System and method for registering multiple images with three-dimensional objects |
DE102007018562A1 (de) | 2007-04-18 | 2008-10-23 | Dialego Ag | Method and device for determining and providing information on an image |
KR100963238B1 (ko) * | 2008-02-12 | 2010-06-10 | 광주과학기술원 | 개인화 및 협업을 위한 테이블탑-모바일 증강현실 시스템과증강현실을 이용한 상호작용방법 |
DE102012201202A1 (de) * | 2012-01-27 | 2013-08-01 | Siemens Aktiengesellschaft | Model construction of a production facility, tool for marking surface sections, method for marking, and method for detecting surface sections |
SK500492012A3 (sk) | 2012-10-31 | 2014-05-06 | Andrej Grék | Method of interaction using augmented reality |
EP3175614A4 (fr) * | 2014-07-31 | 2018-03-28 | Hewlett-Packard Development Company, L.P. | Modifications virtuelles d'un objet réel |
- 2019
- 2019-12-03 EP EP19812794.6A patent/EP3891719A1/fr not_active Withdrawn
- 2019-12-03 WO PCT/EP2019/083565 patent/WO2020115084A1/fr unknown
Also Published As
Publication number | Publication date |
---|---|
WO2020115084A1 (fr) | 2020-06-11 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
| 17P | Request for examination filed | Effective date: 20210705
| AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| DAV | Request for validation of the european patent (deleted) |
| DAX | Request for extension of the european patent (deleted) |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS
| 17Q | First examination report despatched | Effective date: 20230912
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
| 18D | Application deemed to be withdrawn | Effective date: 20240123