EP3332284A1 - Method and device for data acquisition and evaluation of environmental data - Google Patents

Method and device for data acquisition and evaluation of environmental data

Info

Publication number
EP3332284A1
Authority
EP
European Patent Office
Prior art keywords
data
image
selection
evaluation
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP16751273.0A
Other languages
German (de)
English (en)
Inventor
Eberhard Schmidt
Tom Sengelaub
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
SensoMotoric Instruments Gesellschaft fuer Innovative Sensorik mbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SensoMotoric Instruments Gesellschaft fuer Innovative Sensorik mbH filed Critical SensoMotoric Instruments Gesellschaft fuer Innovative Sensorik mbH
Publication of EP3332284A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/0093: Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes

Definitions

  • the invention is based on a method for data acquisition of environmental data of an environment of a user by means of a scene image recording device and for evaluation of the acquired environmental data by means of an evaluation device. Furthermore, the invention is based on a corresponding device with a scene image recording device for data acquisition of environmental data of an environment of a user and with an evaluation device for evaluating the acquired environmental data.
  • Examples of such systems are augmented reality systems, such as, for example, augmented reality glasses.
  • Through such glasses, computer-generated objects can be superimposed on the reality perceived by the user; these objects can in particular relate to objects of the real environment.
  • For example, additional information about objects in the environment can be displayed through the glasses. To make this possible, the images taken by the scene camera are evaluated and searched for existing or specific objects. If such objects are found in the image recordings, the corresponding information is faded in through the glasses.
  • the eye-tracking data can be compared with the recorded scene image data in order, for example, to determine where in the surroundings the user is currently looking, in particular also at which object in the surroundings.
  • registration methods can also be used here, which make it possible to map an image onto a reference image that was taken, for example, from another perspective. Such registration methods can be used to easily accumulate, for example, gaze direction data over time as well as over multiple users by transferring them to a common reference image.
  • certain objects, significant areas, or significant points can be defined in the reference image, such as, for example, patterns, edges or intersections of edges, which are then sought and identified in the evaluation of the scene image.
  • a transformation can be derived that maps the scene image onto the reference image. This transformation can, for example, be used in the same way in order to map the point of view of the user in the scene image onto the reference image.
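A minimal sketch of such a registration step follows, using ORB feature matching and a RANSAC homography in OpenCV to map a gaze point from a scene image onto a reference image. The feature detector, matcher and all parameter values are illustrative assumptions; the patent does not prescribe a specific registration algorithm.

```python
import cv2
import numpy as np

def map_gaze_to_reference(scene_img, ref_img, gaze_xy):
    """Register scene_img to ref_img and map the gaze point across."""
    orb = cv2.ORB_create(1000)
    kp_s, des_s = orb.detectAndCompute(scene_img, None)
    kp_r, des_r = orb.detectAndCompute(ref_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_s, des_r), key=lambda m: m.distance)[:200]
    src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers
    if H is None:
        return None  # registration failed; leave this gaze sample unmapped
    pt = np.float32([[gaze_xy]])  # shape (1, 1, 2) as perspectiveTransform expects
    return tuple(cv2.perspectiveTransform(pt, H)[0, 0])
```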
  • the object of the present invention is therefore to provide a method and a device for data acquisition and evaluation of environmental data, which allow a reduction of data volumes and at the same time keep the loss of relevant data as low as possible.
  • the inventive method for the data acquisition of environmental data of an environment of a user by means of a scene image recording device, such as a scene camera, and for the evaluation of the acquired environmental data by means of an evaluation device is characterized in that a spatial and/or temporal selection is made concerning the acquisition of the environmental data by means of the scene image recording device and/or a transmission of the environmental data from the scene image recording device to the evaluation device and/or an evaluation of the environmental data by the evaluation device.
  • This selection is made, and in particular controlled or regulated, as a function of at least one detected and temporally variable first parameter.
  • By means of such a selection it is advantageously possible to categorize data, for example in terms of their relevance, as specified by the detected first parameter.
  • Several temporal and / or spatial selections can also be made here, for example a first selection for environment data with the highest relevance, a second selection for environment data with medium relevance, a third selection for environment data with little relevance, etc.
  • Various reduction measures can then advantageously be limited to non-relevant or less relevant data, so that the total amount of data can be reduced without having to give up relevant information.
  • such a selection can advantageously be made on the entire data path from acquisition to evaluation, so that numerous possibilities for data reduction are provided.
  • the selection concerning the acquisition of the environmental data may specify, for example, which of the image data captured by the scene image recording device are read out from an image sensor of the scene image recording device and which are not, or how often and at what rate.
  • the selection concerning the transmission of the environmental data may specify which of the acquired data are transmitted, which are not, or in which quality, for example compressed or uncompressed.
  • a selection related to the evaluation may determine which of the data are to be evaluated, which are not, or which ones first.
  • a temporal selection may specify, for example, when data are acquired, read out, transmitted or evaluated, when they are not, or at what rate. A minimal sketch of such a staged selection follows.
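The snippet below gates all three stages (capture, transmission, evaluation) with one time-varying first parameter, here whether a fixation is currently detected. The stage fields, rates and priority values are illustrative assumptions, not values taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SelectionPolicy:
    capture_hz: float    # frame rate requested from the image sensor
    transmit_full: bool  # True: send unreduced; False: compress or skip
    eval_priority: int   # lower value = evaluated earlier

def select(fixation_detected: bool) -> SelectionPolicy:
    if fixation_detected:
        # relevant data: full quality along the whole acquisition-to-evaluation path
        return SelectionPolicy(capture_hz=60.0, transmit_full=True, eval_priority=0)
    # saccade or lid closure: quasi sleep mode for data acquisition
    return SelectionPolicy(capture_hz=5.0, transmit_full=False, eval_priority=9)
```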
  • the invention thus enables a significant reduction of the required bandwidth, whereby the gained bandwidth can in turn be converted into faster frame rates, faster processing, lower latency times, lower energy or processor power requirements, simpler interfaces and less expensive components.
  • the scene image recording device can generally be configured as one or more cameras, for example as a classic 2D sensor, as a so-called event-based sensor and / or as a 3D camera (for example TOF, depth map, stereo camera, etc.).
  • the environmental data selected according to the selection are treated in a first predeterminable manner, in particular acquired with the scene image recording device and/or read out from the scene image recording device and/or transmitted to the evaluation device and/or evaluated by it, while the environmental data not selected according to the selection are not treated at all or are treated in at least a second predeterminable manner different from the first, in particular again acquired and/or read out and/or transmitted and/or evaluated.
  • advantageously, different reduction measures can be applied to the selected as well as to the non-selected environmental data.
  • For example, the selected environmental data can be acquired, transmitted and evaluated at maximum quality, while the non-selected environmental data are not used at all, which reduces the total amount of data in a particularly effective manner, or are at least acquired at lower quality or transmitted and evaluated with lower priority, which advantageously still allows these data to be used while reducing the data volume at the same time.
  • the environmental data not selected according to the selection are reduced, in particular while the environmental data selected according to the selection are not reduced.
  • In the case of a spatial selection, such a reduction can take place, for example, by not recording non-selected image areas, not transmitting them to the evaluation device or not evaluating them, or by compressing non-selected image regions or reducing them structurally, for example in terms of their color depth, or similar.
  • A spatial selection can also be made, for example in the presence of multiple cameras of the scene image recording device, by choosing one of these cameras for the acquisition of the environmental data. Alternatively, only selected information levels are recorded, read out, transmitted or processed from individual or multiple cameras. This may concern, e.g., a reduction to edges, depth information, gray values, or certain frequency ranges.
  • A spatial selection has the consequence, irrespective of whether it relates to the recording, transmission or evaluation, that an image reduced in terms of its data volume compared to the originally recorded image is provided for the evaluation.
  • In the case of a temporal selection, the reduction can be accomplished, for example, in that image data are either not recorded at all or are recorded, transmitted and/or evaluated at a low frame rate.
  • it is advantageously possible in this way to reduce the total amount of data, and since this reduction is preferably restricted to the non-selected environmental data, the loss of relevant information can be kept as low as possible.
  • the environment data not selected according to the selection is compressed.
  • the environment data not selected according to the selection may be filtered.
  • Data compression can be achieved, for example, by binning or by reducing the color information.
  • color filters can be used during filtering, for example to reduce the color depth. In this way, the non-selected data can be reduced without being lost completely. In the event that the selected data do not contain the desired information, the non-selected data can then still be used. A sketch of such a reduction follows below.
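As an illustration of such reduction measures, a minimal numpy sketch applies 2x2 binning plus color-depth reduction to non-selected image data; the binning factor and the retained bit depth are illustrative assumptions.

```python
import numpy as np

def reduce_unselected(img: np.ndarray, keep_bits: int = 4) -> np.ndarray:
    """Shrink non-selected image data by binning and color-depth reduction."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.uint16)  # widen to avoid overflow when summing
    # 2x2 binning: average each block of four neighboring pixels
    binned = (img[0::2, 0::2] + img[0::2, 1::2] +
              img[1::2, 0::2] + img[1::2, 1::2]) // 4
    # reduce color depth from 8 bits to keep_bits bits per channel
    step = 1 << (8 - keep_bits)
    return (binned // step * step).astype(np.uint8)
```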
  • a rate relating to the acquisition and/or the reading and/or the transmission and/or the evaluation of the environmental data not selected according to the selection is reduced compared to a rate relating to the acquisition and/or reading and/or transmission and/or evaluation of the environmental data selected according to the selection.
  • In this way, the image acquisition rate, and thus also the amount of data to be recorded, transmitted and ultimately evaluated, can be kept low, quasi a sleep mode for data acquisition.
  • the transmission rate or the evaluation rate can be reduced accordingly.
  • the environmental data not selected according to the selection is assigned a lower temporal priority, in particular during the evaluation, than the environmental data selected according to the selection.
  • This variant saves a great deal of time in the evaluation, since the analysis can be started in those image areas for which the probability is high that the sought information or objects are to be found there.
  • When using a 3D scene camera as the scene image recording device, it is also conceivable that only part of the so-called depth map is utilized as a reduction measure.
  • This part can in turn be determined or selected as a function of an eye parameter of the user as well as in dependence on one or more image characteristics, for example on the basis of the determined line of sight and its intersection with an object, or on the basis of the vergence or the state of accommodation of the eye, etc.
  • How to reduce the environmental data not selected according to the selection may be either predetermined or determined depending on one or more other parameters, for example depending on the image characteristics. In the course of a preliminary analysis of a captured image, it can be determined, for example, whether objects or further objects are present in the image around the area of the viewpoint. For example, if the user is looking at a particular point on a white wall, it can be determined based on the pre-analysis that the data not selected according to the selection should not be further processed at all or, for example, should be transmitted or evaluated only in a compressed manner.
  • the type of reduction of the environmental data can also take place as a function of specifications relating to a maximum amount of data or data rate during the transmission and/or evaluation, so that a compression type is selected that meets these specifications. Also, the type of reduction may be chosen depending on an application or, in general, on the purpose of the data analysis and processing. If, for example, color information plays a subordinate role and good contrasts or high resolution are required, then color filters can be selected as a reduction measure instead of using compression methods that reduce the overall resolution.
  • the described selection parameters for the reduction measures are advantageous in particular when applied to structural and / or spatial reduction measures.
  • the environmental data selected according to the selection can be enriched by additional data from a data source different from the scene image recording device.
  • Such enrichments may represent, for example, a predefined emphasis, in particular in terms of color or contrast, an annotation by the user, for example by voice input, an annotation of biometric or other user data, and/or an annotation or combination of performance data based on an action or task performed by the user or by an application.
  • In the subsequent evaluation, the selected environmental data can thus advantageously be evaluated together with the additional data, for example precisely with reference to the additional information.
  • the data source can represent, for example, a further detection device, for example a voice detection device, a gesture detection device, a heart rate monitor, an EEG, or also a memory in which the additional data are stored.
  • the at least one first detected parameter represents an eye parameter of at least one eye of the user.
  • an eye tracker can serve to capture the eye parameter.
  • the eye parameter may in this case represent a viewing direction and/or a viewpoint and/or a gaze pattern, in particular an eye movement and/or an eye-tracking movement and/or a temporal viewpoint sequence, and/or an eye opening state and/or information about a distance of the viewpoint from the user, such as a convergence angle of the user's two eyes.
  • the invention is based on the recognition that in particular objects or significant points, such as corners or edges of objects, attract the attention of the eye, in particular in contrast to, for example, color-homogeneous and non-structured surfaces. For example, if the user looks around in his environment, his eyes automatically look at salient points and areas, such as corners, edges, or objects in general. This can be used in a particularly advantageous manner for object detection in the user's environment since it can be assumed that relevant objects, points or areas are very likely to be located at the location of the environment which the user is currently looking at.
  • the detected viewing direction can, for example, be matched with the environmental data recorded by means of the scene image recording device in order to determine the user's point of view in the corresponding scene image.
  • an area around this determined viewpoint in the scene image can then be spatially selected and thus treated as a relevant image area in the first predeterminable manner, while image areas outside this area can be classified as irrelevant or less relevant and thus not treated at all or treated in the second predeterminable manner in order to reduce the amount of data; see the sketch below.
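A minimal sketch of this spatial selection crops a square region around the detected viewpoint and produces a mask for the remaining, reducible image areas; the fixed radius is an illustrative assumption (adaptive sizing is sketched further below).

```python
import numpy as np

def select_roi(img: np.ndarray, viewpoint, radius: int = 96):
    """Crop the region around the gaze point; mask marks the selected area."""
    x, y = int(viewpoint[0]), int(viewpoint[1])
    h, w = img.shape[:2]
    x0, y0 = max(0, x - radius), max(0, y - radius)
    x1, y1 = min(w, x + radius), min(h, y + radius)
    roi = img[y0:y1, x0:x1]              # relevant data: treat at full quality
    mask = np.zeros((h, w), dtype=bool)  # everything else: reduce or drop
    mask[y0:y1, x0:x1] = True
    return roi, mask
```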
  • information about a distance of the viewpoint from the user, determined for example from a convergence angle of the two eyes or from a state of accommodation of the at least one eye, can advantageously be used to make a spatial selection, especially a three-dimensional data selection.
  • When using a 3D scene camera as the scene image recording device, only a part of the so-called depth map can then be utilized, in particular only for the selected three-dimensional area.
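A sketch of such a three-dimensional selection: the fixation distance is estimated from the vergence angle with a simple symmetric-vergence model, and only a depth-map slice around that distance is kept. The model, the tolerance and all numbers are illustrative assumptions.

```python
import numpy as np

def fixation_distance_m(ipd_m: float, vergence_angle_rad: float) -> float:
    # symmetric vergence on the midline: d = (ipd / 2) / tan(angle / 2)
    return (ipd_m / 2.0) / np.tan(vergence_angle_rad / 2.0)

def select_depth_slice(depth_map: np.ndarray, dist_m: float, tol_m: float = 0.5):
    """Keep only depth values near the fixated distance; zero out the rest."""
    mask = np.abs(depth_map - dist_m) <= tol_m
    return np.where(mask, depth_map, 0.0), mask

d = fixation_distance_m(0.063, np.deg2rad(3.0))  # ~1.2 m for a 63 mm IPD
```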
  • the viewpoint of the user in the scene image, in particular as a 2D viewpoint or 3D viewpoint, can thereby be determined, in particular by the scene image recording device and the eye tracker working synchronously to determine the viewing direction and/or the viewpoint.
  • For example, the eye tracker can take an image of an eye of the user and determine therefrom the viewing direction at the same time at which the scene image recording device makes a corresponding image recording of the environment for acquiring the environmental data.
  • the scene image recording device is preferably also arranged and/or formed such that the field of view of the scene image recording device overlaps for the most part with the field of view of the user, or at least with one possible field of view of the user, and in particular completely covers it.
  • It is particularly advantageous, for example, for the scene image recording device to be part of a device worn on the head which also includes the eye tracker.
  • When the user moves his head, the scene image recording device is thus also moved along with it, so that it is advantageously always oriented in the direction of the field of vision of the user.
  • Conceivable would be embodiments in which the scene image recording device is not arranged on the head of the user or other body part of the user but, for example, stationary.
  • In that case, the scene image recording device may include, for example, one or more cameras, which then preferably cover as large a solid-angle region of a room as possible.
  • a position of the user or of his head in relation to the scene camera coordinate system could also be determined by the scene image recording device, and the correspondingly determined viewing direction could also be converted into the scene camera system.
  • the invention allows the use of the available bandwidth for the information essential to the user.
  • the viewing direction corresponding to an image acquisition of the scene image recording device at a specific time does not necessarily have to be determined on the basis of image data of the user's eye that were recorded at the same time.
  • the viewing direction and/or the resulting viewpoint can, for example, also be predicted on the basis of one or more temporally preceding image recordings of the eye, for example by means of Kalman filters or other methods.
  • the idea underlying the viewpoint prediction is that eye movements can be subdivided into saccades and fixations, in particular moving and non-moving fixations.
  • a saccade represents the change between two fixations. During such a saccade, the eye does not take in information; it does so only during a fixation.
  • Such a saccade follows a ballistic eye movement, so that, for example, by detecting initial values of such a saccade, such as the initial velocity, the initial acceleration and their direction, the time and location of the end point of such a saccade, which ultimately ends in a fixation, can be determined and predicted.
  • Such viewpoint forecasts can advantageously also be used in the present case in order, for example, to predict the viewing direction or the viewpoint for a time at which the scene image recording device then records a corresponding environmental image.
  • the end point can thus be determined and used for the next temporal and/or spatial selection already at the end of the saccade. In this way, latencies can advantageously also be shortened; a sketch of such a prediction follows.
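The following sketch predicts a saccade landing point from its direction and peak velocity via a linear "main sequence" relation (saccade amplitude grows with peak velocity). The patent only states that ballistic saccade parameters allow endpoint prediction; the linear relation and its constants are illustrative assumptions.

```python
import numpy as np

def predict_landing(start_deg, direction_unit, peak_velocity_dps,
                    slope=0.02, intercept=-2.0):
    """Estimate the saccade end point in visual degrees from its onset values."""
    # main-sequence assumption: amplitude_deg ~ slope * peak_velocity + intercept
    amplitude_deg = max(0.0, slope * peak_velocity_dps + intercept)
    return (np.asarray(start_deg, dtype=float)
            + amplitude_deg * np.asarray(direction_unit, dtype=float))

# a rightward saccade from (0, 0) deg with a 400 deg/s peak lands near (6, 0) deg
landing = predict_landing((0.0, 0.0), (1.0, 0.0), 400.0)
```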
  • a spatial selection can then be made by moving the area to be selected, or a second spatial area, to the one or more predicted viewpoints within a prediction window.
  • Not only the viewpoint or the viewing direction can advantageously be used to select relevant and irrelevant data, but also, for example, a gaze pattern or eye movements or temporal viewpoint sequences or characteristic eye movement sequences, such as the saccades and fixations just described.
  • Such gaze patterns can preferably be used particularly advantageously for a temporal selection since, as described, a user does not record any environmental information during a saccade.
  • the viewpoints in an image recorded by the scene image recording device during a saccade are therefore also less suitable for providing information about the presence of relevant objects, points or areas in the image.
  • viewpoints attributable to a fixation in the scene image are very well suited to providing an indication of the presence of relevant image areas, objects or points. Since these two states can be distinguished and recorded on the basis of characteristic eye movements for the saccade and fixation, these states are particularly well suited for making a temporal selection with regard to relevant environmental data.
  • provision may be made for image recordings of the surroundings to be made only when a fixation is detected.
  • Likewise, the image acquisition rate during a non-moving fixation can be reduced compared to a moving fixation, for example limited to one or a few images during a detected or predicted fixation phase, since during a non-moving fixation the user's point of view with respect to his environment does not change. A sketch of such a rate control follows.
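As a sketch, a velocity-threshold (I-VT-style) classifier on successive gaze samples can supply the fixation/saccade state that drives this temporal selection; the threshold and the rates are illustrative assumptions.

```python
import numpy as np

SACCADE_THRESHOLD_DPS = 100.0  # I-VT thresholds of roughly 30-100 deg/s are common

def gaze_state(prev_deg, curr_deg, dt_s: float) -> str:
    """Classify the current sample as 'saccade' or 'fixation' by angular velocity."""
    velocity = np.linalg.norm(np.subtract(curr_deg, prev_deg)) / dt_s
    return "saccade" if velocity > SACCADE_THRESHOLD_DPS else "fixation"

def capture_rate_hz(state: str, viewpoint_moving: bool) -> float:
    if state == "saccade":
        return 0.0  # the eye takes in no scene information mid-saccade
    # non-moving fixation: one or a few frames suffice; moving fixation: keep up
    return 30.0 if viewpoint_moving else 2.0
```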
  • Gaze pursuit movements can also be detected, and thus a moving object of particular importance can be identified. It is also possible to draw conclusions about the significance of certain image contents from pupil reactions and thus support the selection for recording, transmission or analysis. These selection and reduction measures described for the image recordings can also be used in the same way, additionally or alternatively, for the readout of the image data, their transmission and their evaluation.
  • Also particularly suitable for making a temporal selection for the acquisition, reading, transmission and/or evaluation of the environmental data is the eye opening state. Since the eye, for example in the case of a lid closure, cannot provide information on relevant image areas, it can be provided that image recordings of the environment by means of the scene image recording device are made, or these data are transmitted or evaluated, only when the eye is open, while when a lid closure is detected, image recordings, their transmission or their evaluation are omitted, or are reduced, compressed, or diminished by other reduction measures.
  • a detected eye parameter thus provides numerous advantageous information about where and when relevant information is present in the environment of a user or in the corresponding images recorded by the scene image recording device. This advantageously makes it possible to make or also to control the spatial and / or temporal selection so that on the one hand data volumes can be reduced particularly effectively and on the other hand the loss of relevant data can be reduced to a minimum.
  • the at least one detected parameter can also represent an image characteristic of an image recorded during the acquisition of the environmental data by means of the scene image recording device and/or a change of the image characteristic with respect to at least one previously recorded image. It is particularly advantageous, for example, to use the image content of the recorded image, or the change in the image content with respect to a previously recorded image, as the at least one first parameter: if the image content does not change, or changes only slightly, compared to a previously recorded image, previously determined results can be used without having to evaluate the newly taken image. For example, it can also be provided that, as long as the image content does not change significantly, images are recorded, transmitted or evaluated at a lower rate or frequency, whereby data can again be saved enormously.
  • This image content comparison can be carried out, for example, in the course of preprocessing, in particular before the image data is transmitted to the evaluation device and evaluated by the latter.
  • an image content comparison can be performed in a much less time-consuming and computationally intensive manner.
  • Such an image content comparison can relate to the entire respective recorded scene image or only to a subarea thereof, such as, for example, again to a previously spatially selected area around the determined viewpoint of the user.
  • Depending on the result, it can then be decided, for example, whether the recorded image data are transmitted to the evaluation device at all or evaluated by it; a sketch follows below.
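A minimal sketch of this preprocessing-stage comparison: the current frame (or only the selected subarea) is compared against the previous one by mean absolute difference, and only noticeably changed frames are forwarded. The metric and threshold are illustrative assumptions.

```python
import numpy as np

def content_changed(prev: np.ndarray, curr: np.ndarray,
                    threshold: float = 8.0) -> bool:
    """True if the mean absolute pixel difference exceeds the threshold."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > threshold

# usage: if not content_changed(prev_roi, curr_roi): skip transmission/evaluation
```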
  • image characteristics which can be used as the at least one first parameter are, for example, also spatial frequencies in the recorded scene image, a contrast or contrast curves, the presence of objects, regions or significant points in the image, a number of objects, regions or points present in the image, and also the arrangement of existing objects, areas, points, structures, etc. in the image.
  • Such image parameters or image characteristics can advantageously be used in particular to make or control a spatial selection, which will be explained later in more detail.
  • the at least one first parameter represents a user input or a detected user characteristic or any other external event from other signal sources or input modalities.
  • Such parameters may alternatively or additionally also be used, for example, to trigger the recording, the transmission and/or the analysis or evaluation of individual images or image sequences, and in particular also to control or regulate them.
  • conventional controls such as buttons, mouse, and so forth may be used, gesture detection may be used, or the like. This allows the user, for example, to actively signal when interesting or relevant objects are in his field of view or look at them.
  • User characteristics may be detected, for example, by detecting movements of a user, gesturing, EEG signals, or the like. Such characteristics can also provide information about whether interesting objects are currently in the user's field of view or not. It is particularly advantageous to provide such parameters for a temporal selection of relevant data.
  • According to an embodiment of the invention, the spatial selection determines which area of the environment is acquired as the environmental data, in particular in the first predeterminable manner, by means of the scene image recording device and/or read out from the scene image recording device and/or transmitted to the evaluation device and/or evaluated by the evaluation device. On the entire data path from acquisition to evaluation, it is thus advantageously possible to spatially select data and thereby characterize the relevant data.
  • the spatial selection is made in such a way depending on a detected viewpoint of the user that the area comprises the viewpoint.
  • the viewpoint is particularly suitable in order to be able to select between relevant and non-relevant or less relevant data.
  • the viewpoint is particularly well suited as the detected parameter, depending on which the spatial selection is made and possibly also time-controlled.
  • According to one variant, the size of the area is predetermined, that is, constant and not variable.
  • For example, the user's point of view may be determined in a corresponding image of the scene image recording device, and an area fixed in terms of its size may then be selected around this viewpoint as the relevant data.
  • This area can be predetermined for example by a fixed radius around the viewpoint or as a fixed image portion with respect to the entire recorded scene image. This represents a particularly simple, less computation-intensive and above all time-saving possibility for selecting and defining the area with the relevant image data.
  • the size of the area is defined or controlled as a function of at least one second parameter. This provides particularly flexible options with regard to the relevant image data. This allows, for example, an adaptive adaptation in order to better distinguish between relevant and non-relevant data around the viewpoint.
  • Suitable as this second parameter are, for example, an image characteristic of an image recorded during the acquisition of the environmental data by means of the scene image recording device, and/or a measure of an accuracy and/or dynamics of the determined viewpoint of the user, and/or at least one device parameter, such as transmission quality, latencies or performance of the processing device, of a device comprising the scene image recording device and/or the evaluation device, and/or a size of an object that is in a predetermined proximity of the viewpoint, or even at least partially overlaps with the user's viewpoint, in an image captured during the acquisition of the environmental data by means of the scene image recording device.
  • If the second parameter represents, for example, the image characteristic, the characteristic of the image content, such as the spatial frequency around the viewpoint, the number or uniqueness of the objects or relevant points, object clusters, feature clusters, the contrast around the viewpoint, or detected objects behind, before or around the viewpoint, can be used to set or control the size and also the boundary of the area to be determined.
  • This makes it possible, for example, to define the area in such a way that it always covers an entire object which the user is looking at, or, for example, always a contiguous area, or the area from the viewpoint all the way to the next edge (starburst), and so on.
  • Preferably, the size of the area is set or controlled depending on a size of an object on which the viewpoint lies, or which is at least in a predetermined proximity of the viewpoint of the user, in particular so that the whole object or a group of objects is always selected along with it. This advantageously increases the probability that the relevant information to be acquired is also completely covered by the selected area. It is also particularly advantageous to provide the measure for the accuracy of the determined viewpoint of the user as the second parameter. If the eye-tracking quality is poor, for example, the determined point of view may deviate greatly from the actual point of view of the user.
  • It is therefore particularly advantageous to increase the area to be selected around the viewpoint when the accuracy of the determined viewpoint is lower, in contrast to the case of higher accuracy of the determined viewpoint.
  • the accuracy of the determined viewpoint can be calculated or estimated by known methods, such as from the quality of the image taken by the eye tracker, the temporal spread of viewpoint values, and so forth.
  • the dynamics of the viewpoint can also advantageously be taken into account in controlling the size of the area. If the viewpoint exhibits high dynamics over its time course, i.e., if it moves or jumps within a short time within a large surrounding area, the size of the area to be selected can be chosen correspondingly larger. Various other device parameters may also be considered in determining the size of the area.
  • For example, the region to be selected may be chosen smaller in size in order to reduce the amount of data to be transmitted or evaluated, and thus to shorten latencies or keep them within predetermined limits.
  • Conversely, the range can be selected to be correspondingly larger.
  • Performance parameters can relate, for example, both to the transmission and to the evaluation as well as to various other components of the device. It can also be provided, for example, that the user himself or another person specifies this second parameter for determining the size of the area to the system. Thus, the user himself can set his priorities regarding time efficiency or data reduction and quality of the result. The larger the range chosen, the more likely it is that all relevant information is included in that range, while the smaller that range is chosen, the less data must be read, transmitted and/or evaluated. A sketch of such an adaptive sizing follows.
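The snippet below sketches such an adaptive control of the region size from gaze accuracy, gaze dynamics and a bandwidth/latency budget; the weights, bounds and combination rule are illustrative assumptions.

```python
def roi_radius_px(base_px: float, gaze_error_px: float,
                  gaze_spread_px: float, bandwidth_headroom: float) -> int:
    """Adapt the selected region's radius to accuracy, dynamics and budget."""
    # poor accuracy or a jumpy viewpoint -> enlarge the region
    radius = base_px + 2.0 * gaze_error_px + 1.0 * gaze_spread_px
    # tight transmission/evaluation budget (headroom < 1) -> shrink it again
    radius *= min(1.0, max(0.25, bandwidth_headroom))
    return int(min(max(radius, 32), 512))  # clamp to sane pixel bounds

# e.g. accurate, calm gaze with full bandwidth: roi_radius_px(96, 5, 3, 1.0) -> 109
```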
  • the temporal selection determines when an area of the environment is acquired as environmental data, in particular in the first predeterminable manner, by means of the scene image recording device and/or read out from the scene image recording device and/or transmitted to the evaluation device and/or evaluated by the evaluation device.
  • As with the spatial selection, it is thus advantageously possible to select data with regard to the entire data path.
  • Preferably, the temporal selection is made as a function of the at least one first parameter in such a way that images and/or image sections are recorded with the scene image recording device, and/or recorded image data are read out and/or transmitted to the evaluation device and/or evaluated by the evaluation device, only then, or only then in the first predeterminable manner, for example at an increased temporal rate, uncompressed, unfiltered, and so on, if the at least one first parameter fulfills a predetermined criterion.
  • For the data not covered by the selection, there is thus the possibility of either not treating them further, in particular not even acquiring them, or of acquiring, reading out, transmitting and/or processing them in a reduced manner, for example compressed or filtered or less frequently.
  • In this way, a temporal control of the selection can advantageously be carried out, so that data volumes can in turn be reduced by not treating data classified as less relevant, or at least treating them with lower quality due to the reduction, without thereby influencing the quality of relevant data.
  • For example, the predetermined criterion may be that a gaze pattern and/or an eye movement and/or a gaze sequence and/or a fixation of the eye, which is detected and/or predicted as the at least one first parameter, has a predetermined characteristic.
  • the predetermined criterion may also be that, based on the opening state of the eye as the at least one first parameter, it is detected and/or predicted that the at least one eye of the user is open.
  • the predetermined criterion can also consist in detecting, on the basis of the image characteristic as the at least one first parameter, that a change of at least a part of the image content with respect to at least a part of the image content of at least one temporally previously recorded image exceeds a predeterminable measure. If the image content does not change, or does not change significantly, the newly acquired image likewise contains no additional, new or relevant information, so that the amount of data can advantageously also be reduced thereby.
  • the predetermined criterion can also be that a user input is detected as the at least one first parameter. This allows the user himself to tell the system when particularly relevant information is in his field of view. Alternatively or additionally, it may also be provided that the user provides information about the presence of relevant information in his field of view in a passive manner, for example by a predetermined user state being recognized and/or predicted as the at least one first parameter on the basis of a user characteristic, such as EEG signals. User behavior, such as gestures or the like, can also be analyzed to provide information about the presence of relevant information in the user's environment.
  • Furthermore, a preprocessing of the environmental data can be carried out, in which the selection is made and/or in which the first predeterminable manner that is assigned to the environmental data selected according to the selection is determined and/or in which the second manner that is assigned to the environmental data not selected according to the selection is determined.
  • Such preprocessing is particularly advantageous in particular if the spatial and / or temporal selection is to be made as a function of an image characteristic of the environmental data acquired as image.
  • A first selection can also already be performed before the preprocessing of the environmental data, so that only selected environmental data are subjected to preprocessing at all.
  • the invention relates to a device with a scene image recording device for data acquisition of environmental data of an environment of a user and with an evaluation device for evaluating the acquired environmental data.
  • the device is designed to provide a spatial and / or temporal selection relating to a detection of the environmental data by means of the scene image recording device and / or a transmission of the environmental data from the scene image recording device to the evaluation device and / or an evaluation of the environmental data by the evaluation device in dependence of at least one detected, temporally variable first parameter.
  • the device further comprises an eye-tracking device which is designed to detect the at least one first parameter.
  • eye parameters such as the viewing direction, the viewpoint and so on are particularly suitable for sorting the image data recorded with the scene image recording device into more and less relevant data.
  • Preferably, the device comprises a device that can be worn on the head, for example augmented reality glasses, wherein the head-wearable device has the scene image recording device and at least one display device, and preferably also the eye-tracking device.
  • Augmented reality glasses allow additional information and objects to be faded in and superimposed on the real environment.
  • the evaluation device can also be integrated, for example, in the device which can be worn on the head, or else be provided as an external evaluation device, for example as a computer, in which case the head-wearable device is designed to transmit the data selected according to the selection in the first predeterminable manner, for example wired or wirelessly, to the external evaluation device, and to transmit the data not selected according to the selection in the second predeterminable manner or not at all.
  • the time expenditure can thus be significantly reduced even in a subsequent video analysis of the scene video or video images.
  • the invention can be advantageously used in a variety of fields of application, such as in mobile eye tracking, to reduce the bandwidth of the scene video by limiting it to the area of the foveal or extended foveal point of view of the user, and likewise to reduce the bandwidth in augmented reality applications.
  • In addition, the area of the scene to which an overlay, that is, information or objects to be overlaid, must be registered can be reduced in size.
  • Furthermore, objects can be visually marked in order to limit the recording with the scene camera to them.
  • the invention also offers advantageous and numerous possibilities for use in automatic scene video analysis, in which, for example, only one area is transmitted around the viewpoint in the scene and, for example, registered with a reference, that is to say with a reference video or a reference picture.
  • In all these cases, a clear reduction of the bandwidth is possible.
  • The image detail, content and recording can be controlled with the available control criteria, so that the amount of data compared to the original video is significantly reduced without losing the respective critical, that is, relevant information.
  • FIG. 1 shows a schematic representation of a device for data acquisition of environmental data of an environment of a user and for their evaluation according to an exemplary embodiment of the invention;
  • FIG. 2 shows a schematic cross-sectional illustration of the device for data acquisition and evaluation according to an embodiment of the invention
  • FIG. 3 shows a schematic representation of acquired environmental data in the form of a scene image to illustrate a method for data acquisition and evaluation according to an exemplary embodiment of the invention
  • FIG. 4 shows a flow chart for illustrating a method for data acquisition and evaluation according to an exemplary embodiment of the invention
  • FIG. 5 shows a flow chart for illustrating a method for data acquisition and evaluation, in particular with a spatial selection of environmental data, according to an exemplary embodiment of the invention;
  • FIG. 6 shows a flowchart to illustrate a method for data acquisition and evaluation, in particular with a temporal selection, according to an exemplary embodiment of the invention.
  • FIG. 1 shows a schematic representation of a device 10 for data acquisition of environmental data of an environment 12 of a user and for their evaluation according to an exemplary embodiment of the invention.
  • the device 10 comprises a head-worn device, embodied here by way of example as spectacles 14, which can be designed as augmented reality glasses or data glasses, or also as conventional spectacles with or without lenses.
  • the wearable device could also be formed in any other way, for example as a helmet or the like.
  • These spectacles 14 furthermore comprise a scene image recording device embodied as a scene camera 16, which is arranged at the front and in the center of the spectacles 14.
  • the scene camera 16 has a field of view 18, which is illustrated in FIG. 1 by dashed lines. Regions of the environment 12 within the field of view 18 of the scene camera 16 can be imaged on the image sensor of the scene camera 16 and thus detected as environmental data.
  • This field of view 18 is preferably designed such that it at least partially overlaps with a field of view of a user wearing the glasses 14, preferably for the most part or even completely.
  • the glasses 14 comprise an eye tracker with two eye cameras 20a, 20b, which in this example are arranged on the inside of the frame of the spectacles 14, so that they can each take pictures of a respective eye of a user wearing the spectacles 14, in order to use these image recordings, for example, for sight-line detection, viewpoint detection, gaze-pattern recognition, eyelid-closure detection, and so on.
  • Furthermore, the spectacles 14 have an optional preprocessing device 23, by which preprocessing steps of the image recordings, in particular those of the scene camera 16, explained in greater detail later, can be performed.
  • In addition, the device 10 has an evaluation device 22, which in this example represents a device separate from the spectacles 14.
  • the evaluation device 22 can be coupled with the glasses 14 via a communicative connection 25, for example a data line, wirelessly or wired.
  • It can also be provided that the scene camera 16 initially takes a scene video, the corresponding data first being stored in a memory (not shown) of the glasses 14, and only at a later time is the communicative connection 25 between the glasses 14 and the evaluation device 22 established in order to read out the data stored in the memory and transmit them to the evaluation device 22.
  • this evaluation device 22 could also be integrated into the spectacles 14.
  • the spectacles 14 can optionally have displays 21, by means of which, for example, additional digital information or objects can be displayed superimposed on the view of the surroundings 12.
  • Such a device 10 can now be used for a variety of applications.
  • a scene video can be recorded by means of the scene camera 16 while a user is wearing the glasses 14 and moves, for example, in the surroundings 12. Meanwhile, the eye cameras 20a, 20b can take pictures of the user's eyes to determine the line of sight corresponding to the respective pictures of the scene video.
  • the acquired data can now be transmitted, for example, to the evaluation device 22, which evaluates the image data of the scene camera 16 and of the eye cameras 20a, 20b.
  • In this way, a scene video can be created in which the user's point of view at the respective time is marked.
  • such a reference recording can also represent an image recording of the environment 12, which was also made by means of the scene camera 16 or else by means of another image recording device.
  • For this purpose, the image recordings of the scene camera 16 must be registered with the reference image. This can be done, for example, by marking significant points or objects in the reference image, which are then searched for and identified in the respective images of the scene video, in order to derive, for example, a transformation that maps a respective scene image onto the reference image. In this way, the viewpoints of the user at the respective times can then be mapped onto the one reference image.
  • Likewise, the currently valid points of view of a first user with respect to his surroundings can be determined, and these points of view of the first user can be displayed to a second user, who views the same environment but from a different perspective, via the display 21 of his spectacles 14 at the corresponding location, in particular in real time, so that the second user can track which objects in the environment the first user is currently looking at.
  • In these methods, however, usually very high amounts of data accumulate, so that both the data transmission and its evaluation are extremely time-consuming and computationally intensive.
  • the invention now advantageously makes it possible to reduce these enormous amounts of data without losing essential information, which would greatly reduce the quality of such a method.
  • This is accomplished by making a selection related to the environment data.
  • This selection can take place during data acquisition by means of the scene camera 16, when reading out the data from the scene camera 16 or its image sensor, when transmitting the data to the evaluation device 22 as well as during the evaluation by the evaluation device 22.
  • For this purpose, the device 10 may comprise at least one control device, which may for example be part of the preprocessing device 23, the evaluation device 22, the scene camera 16 and/or the eye tracker, or may also be designed as a further separate control device, in order to control the data acquisition, the readout of the data, the data transmission and their evaluation, as well as the making of the selection.
  • the one or more control devices may comprise a processor device which is set up to carry out one or more embodiments of the method according to the invention.
  • the processor device can have at least one microprocessor and / or at least one microcontroller.
  • the processor device can have program code which is set up to execute the embodiments of the method according to the invention when executed by the processor device.
  • the program code may be stored in a data memory of the processor device.
  • the selection is made depending on at least one detected parameter.
  • This parameter serves to categorize or estimate the relevance of the environment data. Numerous parameters come into consideration on the basis of which such a categorization can be made.
  • Particularly suitable is the viewing direction or the point of view of the user with respect to a recorded scene image. This is due to the fact that the eye automatically targets significant areas, points or objects in its surroundings.
  • the viewpoint in the image can be advantageously used to find the most likely location of significant points, objects or areas in the scene image.
  • FIG. 2 shows a schematic representation of the spectacles 14 of the device 10 in cross-section and an eye 24 of a user looking through the spectacles 14.
  • the scene camera 16 takes an image 26 (see FIG. 3) of a surrounding area 28, which is located in the field of view 18 of the scene camera 16.
  • the eye tracker with the eye camera 20a takes one or more pictures of the eye 24 of the user, on the basis of which the viewing direction 30 of the eye 24 at the time of the acquisition of the surrounding image 26 is determined.
  • On the basis of this viewing direction 30, the viewpoint 32 in the recorded scene image 26 can be calculated, as sketched below.
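A minimal sketch of this calculation under a pinhole model: the gaze direction, already expressed in the scene camera's coordinate frame (i.e., with the eye-to-camera extrinsics applied), is intersected with the image plane via the camera's intrinsic matrix K. All numbers are illustrative assumptions.

```python
import numpy as np

def gaze_to_image_point(gaze_dir_cam: np.ndarray, K: np.ndarray):
    """Project a gaze direction (camera frame) to pixel coordinates."""
    d = np.asarray(gaze_dir_cam, dtype=float)
    if d[2] <= 0:
        return None  # gaze does not point into the camera's viewing direction
    uvw = K @ (d / d[2])  # project the unit-depth point through the intrinsics
    return uvw[0], uvw[1]

K = np.array([[800.0, 0.0, 320.0],   # fx, 0, cx
              [0.0, 800.0, 240.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])
print(gaze_to_image_point(np.array([0.05, -0.02, 1.0]), K))  # -> (360.0, 224.0)
```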
  • FIG. 3 shows a schematic representation of such a captured scene image 26, together with this calculated viewpoint 32.
  • 34a and 34b designate objects in the scene image 26.
  • an area around the viewpoint 32 can now be selected as relevant image data, since the probability of finding an object or a significant point or area in the vicinity of the viewpoint 32 is the highest.
  • the size of this area can be fixed by a fixed parameter, such as image proportion or radius around the viewpoint 32, as illustrated in this example in Fig. 3 for the area 36a. Even better results can be achieved, for example, if this range can be adapted adaptively, for example by means of a feedback loop.
  • For such an adaptive adaptation, the characteristic of the image content, such as the spatial frequency around the viewpoint 32, the number or uniqueness of features or feature clusters, objects or interesting regions, or the contrast intensity around the viewpoint 32, can be considered.
  • the examination of the image characteristic in the region of the determined viewpoint 32 can be carried out, for example, in the course of preprocessing by the preprocessing device 23. This makes it possible to detect contiguous areas or objects, such as the object 34a in this example, and the area 36b to be selected may then be chosen so that an entire object 34a being viewed, or a contiguous area, or the area from the viewpoint 32 all the way to the nearest edge, is always encompassed by the selected area 36b. Such a selection now allows a data reduction in a particularly advantageous manner.
  • For example, only the selected region 36a, 36b is recorded by the scene camera 16, read out from it, transmitted to the evaluation device 22 or evaluated by the evaluation device 22.
  • Alternatively, image regions of the scene image 26 other than the selected region 36a, 36b are treated further in a reduced manner, for example detected at a lower resolution and transmitted or evaluated in compressed form by means of compression algorithms.
  • For example, the image data in the immediate vicinity of the viewpoint 32, i.e., within the first region 36a, can be treated in an unreduced manner in order to achieve maximum quality, while the data outside this first region 36a and within the second region 36c are treated in a reduced manner, for example compressed or with lower priority, and the remaining data outside the area 36c and the area 36a are not treated at all or are treated in a much more reduced manner compared to the data in the area 36c.
  • In general, image data can thus be assigned to a plurality of relevance classes, whereby preferably the further image regions are away from the viewpoint 32, the less relevance they are assigned; see the sketch below. This also allows various degrees and/or modes of compression based on human vision. For relevance-dependent data reduction, however, not only a spatial selection can be used; for example, a temporal selection of data can also be made.
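A sketch of such graded, relevance-class-based reduction: pixels are binned into rings by distance from the viewpoint and quantized more coarsely the further out they lie, loosely following the fall-off of human visual acuity. The ring radii and quantization steps are illustrative assumptions.

```python
import numpy as np

def foveated_reduce(img: np.ndarray, viewpoint, radii=(64, 160)) -> np.ndarray:
    """Quantize image data more coarsely with growing distance from the gaze point."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - viewpoint[0], yy - viewpoint[1])
    out = img.copy()
    mid = (dist >= radii[0]) & (dist < radii[1])  # relevance class 2
    far = dist >= radii[1]                        # relevance class 3
    out[mid] = (out[mid] // 16) * 16              # moderate quantization
    out[far] = (out[far] // 64) * 64              # strong quantization
    return out  # class 1 (around the viewpoint) is left untouched
```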
  • Such a temporal selection means, for example, that if a plurality of images 26 are taken as a sequence of images, they may be classified as relevant or less relevant depending on the time of their acquisition and may thus be selected in terms of time.
  • Such a temporal selection can be event-driven, for example. If, for example, an eyelid closure of the user is detected, or if for other reasons the viewpoint 32 cannot be determined for a specific time or period, the image data captured during this period can be classified as less relevant, or no image recordings at all are made during such a less relevant period. It can also be provided that image data are only classified as relevant if they exhibit a significant change in content compared to a previously recorded image.
  • advantageously temporal and spatial selection can also be combined as desired.
  • image data outside a spatially selected area 36a, 36b are read out, transmitted or processed at a lower frequency or rate than the image data assigned to the selected areas 36a, 36b.
  • FIG. 4 shows a flow chart for illustrating a method for environmental data acquisition and evaluation, which combines, in particular, spatial and temporal selections, according to an exemplary embodiment of the invention.
  • In this method, images 26 of the environment 12 of a user are taken in a time sequence, for example in the form of a video, a respective image acquisition being illustrated by S16.
  • Furthermore, the viewing direction is determined in S18 for a respective image acquisition, and the user's point of view in the corresponding image 26 is determined in S20 on the basis of the determined viewing direction and the corresponding image acquisition.
  • In this example, a temporal selection already precedes this procedure.
  • For this purpose, it is checked in S10 whether the eye is open or whether a fixation has been recognized or predicted; this information can be determined on the basis of the images of the eye or the eye data recorded by the eye tracker.
  • An image acquisition and a corresponding determination of the viewing direction take place only when the eye is open or a fixation has been recognized or predicted. If this is not the case, the check as to whether the eye is open or a fixation has been recognized or predicted is repeated until this is the case.
  • Alternatively, it can be provided that, in the event that the eye is open or a certain eye movement, such as a fixation or a saccade, is detected, a first rate is established in S12 for the image acquisition in S16 and the sight-line determination in S18, while, when the eye is closed or, for example, no fixation has been detected or predicted, a second rate, which is lower than the first rate, is set in S14 for the image acquisition in S16 and the sight-line determination in S18.
  • These rates relate to the image acquisition rates for the image acquisition in S16 and possibly also to the corresponding viewing-direction determination in S18. In other words, as long as the eye is closed or the eye is not fixated on a specific point or area, images are taken less frequently, that is, at a reduced rate.
  • a fixation can also be detected on the basis of an offline fixation detection, which can be used, for example, for an additional or alternative selection of the environmental data in a subsequent evaluation of the data.
  • An additional temporal selection could optionally also be made in S20 when determining the viewpoint 32 in the image 26. For example, if the viewpoint 32 of the user is outside the field of view 18 of the scene camera 16, and thus not in the captured image 26, the image acquisition may be discarded and the method may proceed to the next image acquisition.
  • Furthermore, biometric characteristics or also external signals can be used, which is illustrated by S0.
  • Such a signal can be, for example, a biometric signal based on a detected biometric characteristic of the user, such as a pulse rate, a pupil contraction, etc.
  • In S0 it can also be checked whether a signal from an external device has been received and/or whether such an external signal fulfills a criterion.
  • Such signals can then be used, for example, to trigger or start the method and only then to begin the check in S10.
  • Such biometric and/or external signals can also be used additionally or alternatively for the check in S10, so that, for example, a viewing-direction determination in S18 and an image recording in S16 only take place when a signal is received in S0 or such a signal fulfills a criterion. It is also optionally possible to use such a signal in order to decide when the size determination of the image area according to S22, which will be described in more detail below, is carried out and when it is not, for example by proceeding immediately to S24.
• The size of an image region around the viewpoint 32 is determined in S22 in the course of a spatial selection.
• The size determination of this image area is carried out dynamically in this example, that is to say as a function of one or more further parameters.
• A parameter may represent, for example, the viewpoint accuracy, which is determined in S22a.
• An estimation of the viewpoint accuracy can, for example, be based on image characteristics or on the image quality of the eye image taken by the eye tracker to determine the viewing direction and the viewpoint, on a statistical analysis of several gaze data points, for example their scatter, or on numerous other parameters.
• The area size is chosen to be larger if the viewpoint accuracy is lower, in comparison with a higher viewpoint accuracy.
• Another parameter can be determined or specified in S22b, such as a threshold for the viewpoint accuracy described above, so that, for example, a corresponding resizing of the image area is performed only if the viewpoint accuracy falls below this threshold; otherwise the size of the image area is always set to the predetermined value, or the setting of the area size in S22c is performed irrespective of the viewpoint accuracy.
• Other parameters may alternatively or additionally be determined in S22b, such as, again, biometric and/or external signals, so that the setting of the area size in S22c can also depend on such signals.
• For the determination of the area size, an image characteristic of the recorded image 26, determined in S22d, can also be used.
• The area encompassing the viewpoint 32 should be selected such that objects that the user is currently looking at are also completely contained in this area.
• Such image characteristics can be determined within the framework of a preliminary analysis or preprocessing of the recorded image 26: the image content can be analyzed by evaluating spatial frequencies around the viewpoint 32, the contrast intensity around the viewpoint 32 can be taken into account, and the presence of objects, edges, or the like around the viewpoint 32 can be considered.
• On the basis of S22a, S22b and S22d, the area size of the image area of the image 26 including the viewpoint 32 is finally set in S22c; a minimal sketch of such a dynamic sizing follows below.
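As an illustration only, the dynamic determination of the area size could look like the following Python sketch; the default radius, the accuracy threshold and the contrast heuristic are assumptions and not values from the description:

    DEFAULT_RADIUS_PX = 100     # predetermined area size
    ACCURACY_THRESHOLD = 0.8    # S22b: resize only below this accuracy

    def select_area_radius(viewpoint_accuracy: float, contrast_around_gaze: float) -> int:
        # S22a-S22d: derive the radius of the image region around the viewpoint;
        # viewpoint_accuracy in [0, 1], e.g. estimated from gaze-sample scatter,
        # contrast_around_gaze in [0, 1], an image characteristic from preprocessing
        radius = DEFAULT_RADIUS_PX
        if viewpoint_accuracy < ACCURACY_THRESHOLD:
            # lower accuracy -> larger area, so the looked-at object stays inside
            radius = int(radius * (2.0 - viewpoint_accuracy))
        if contrast_around_gaze < 0.2:
            # little structure around the gaze point suggests a large object
            radius = int(radius * 1.5)
        return radius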
  • An additional temporal selection may optionally be added to this spatial selection.
• For this temporal selection, it can now be checked in S24 whether the image content of either the entire image 26 or only of the region selected in S22 has changed significantly compared with a previous image acquisition. If this is not the case, according to a first embodiment, all image data relating to the current image acquisition can be discarded and the method proceeds to the next image acquisition. This can be repeated until a significant change in the image content is detected in S24. Only then are the data, at least the image data of the selected image area, transmitted in S28 to the evaluation device 22. According to a further embodiment, however, first and second rates for the transmission of the image data can also be set here again.
• If a significant change has been detected, the image data of the selected image area can be transmitted at a first rate in S25.
• Otherwise, the image data of at least the selected image area can be transmitted at a second rate, lower than the first rate, in S26, since in that case the image data are less relevant; an illustrative sketch of this change check follows below.
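A purely illustrative realization of the change check in S24, using a mean absolute frame difference with a hypothetical threshold:

    import numpy as np

    CHANGE_THRESHOLD = 12.0   # mean absolute pixel difference; hypothetical value

    def content_changed(current_roi: np.ndarray, previous_roi: np.ndarray) -> bool:
        # S24: has the selected image region changed significantly?
        diff = np.abs(current_roi.astype(np.int16) - previous_roi.astype(np.int16))
        return float(diff.mean()) > CHANGE_THRESHOLD

    def transmission_rate(changed: bool, first_rate: float = 30.0,
                          second_rate: float = 3.0) -> float:
        # S25/S26: transmit relevant data at the first, higher rate
        return first_rate if changed else second_rate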
• The evaluation can additionally check predetermined criteria, such as whether objects were detected in the selected and evaluated image area, whether predetermined objects were detected in this image area, or whether predetermined significant points or areas in that image area could be recognized. If this is the case, the method can be continued accordingly.
• The invention thus provides numerous possibilities for selecting the data according to their relevance, both spatially and temporally, in a variety of ways and according to numerous possible criteria, in order to allow a data reduction without loss of relevant image content.
• The described selection steps can be implemented in any combination, as well as individually, as will be illustrated with reference to FIG. 5 and FIG. 6 on the basis of simplified examples.
• FIG. 5 shows a flowchart illustrating a method for data acquisition and evaluation of environmental data, in which in particular only a spatial selection is made, according to a further exemplary embodiment of the invention.
• Image acquisition from the surroundings 12 of a user again takes place in S16 by means of the scene camera 16, the eye tracker determining or predicting in S18 a viewing direction corresponding to the time of the image acquisition.
• The user's viewpoint 32 in the recorded image 26 is again determined in S20.
• In S23, an area size of the area around the viewpoint 32 is determined.
• The area size can be predetermined, for example, by a fixed parameter, such as a predetermined radius around the viewpoint 32.
• The image data of the selected area around the viewpoint 32 are transmitted to the evaluation device 22 in S28.
• The image data outside the area specified in S23 are first compressed in S32 and then transmitted to the evaluation device 22 in S34. The latter then evaluates the transmitted data in S38, optionally in a predetermined sequence, for example first those of the selected image area and only then those of the non-selected image area.
• By means of this method, a spatial selection of an image area advantageously divides the image data into relevant and less relevant data, and the compression of the less relevant data provides an overall data reduction without having to give up relevant information; a sketch of this split is given below.
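A non-authoritative sketch of this spatial split; the downsampling stands in for any real compression of the less relevant background:

    import numpy as np

    def split_and_reduce(image: np.ndarray, viewpoint_xy, radius: int):
        # divide a scene image into a full-quality area around the viewpoint
        # (S23/S28) and a background that is reduced before transmission (S32/S34)
        h, w = image.shape[:2]
        x, y = viewpoint_xy
        x0, x1 = max(0, x - radius), min(w, x + radius)
        y0, y1 = max(0, y - radius), min(h, y + radius)
        roi = image[y0:y1, x0:x1].copy()      # relevant data: keep unchanged
        background = image.copy()
        background[y0:y1, x0:x1] = 0          # the ROI is sent separately
        background = background[::4, ::4]     # crude reduction by downsampling
        return roi, background, (x0, y0)      # the offset allows reassembly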
• Optionally, a further parameter, such as a biometric signal or an external signal, as illustrated by S00, can be used here as well, triggering, for example, the image acquisition in S16 and/or the viewpoint determination in S20, and/or the area size in S23 can be set as a function of it.
• In the further simplified example, which in particular involves only a temporal selection, it is again checked in step S10 whether the user's eye or both eyes of the user are open and/or a fixation of the eye is detected or at least predicted for the time point of the subsequent image acquisition in S16. If this is the case, a first rate for the image acquisition in S16 is set in S12. This first rate, i.e. image acquisition rate, is retained, in particular for the subsequent image recordings, as long as it is still detected in S10 that the eye is open and/or a fixation is detected.
• Otherwise, a second image acquisition rate, which is lower than the first rate, is set for S16, since it can then be assumed that the corresponding image recordings contain less relevant information, and data can thereby be saved at the same time.
• One or more image characteristics are determined in S21 in the course of a preprocessing of the image data, for example by the preprocessing device 23.
  • This image characteristic can relate, for example, to the image content of the recorded image 26, wherein it is checked in the following step S24 whether this image content has changed significantly, for example to a predetermined extent, compared with a previous image acquisition. If this is the case, the image data are transmitted to the evaluation device 22 in S33. If this is not the case, however, less relevant image data can be assumed, and these are then first compressed in S31 and only then transmitted in S33 to the evaluation device 22, which then evaluates the transmitted image data in S38.
• The viewpoint 32 (2D or 3D) calculated by the eye tracker with respect to the scene camera coordinate system can also be used here in order to specify a relevant image detail spatially and possibly additionally temporally.
• Only this image section, alone or together with an additional reference detail, e.g. a frame at the outer edge of the scene image 26 or a few lines above, below or to the side in the scene image 26, can then be read out from the scene camera sensor, transmitted or further processed; a sketch of such a windowed sensor readout follows below.
• Such an image section is also referred to as an AOI (area of interest).
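Illustrative only: computing the sensor windows for such a readout, i.e. the AOI plus a thin reference strip of lines at the image border; the tuple-based window format is an assumption, real sensors expose comparable ROI registers:

    def readout_windows(sensor_width: int, sensor_height: int,
                        aoi: tuple, reference_lines: int = 8):
        # return the pixel windows to read from the sensor: a reference strip
        # at the image border plus the gaze-dependent AOI (x0, y0, x1, y1)
        x0, y0, x1, y1 = aoi
        windows = [
            (0, 0, sensor_width, reference_lines),   # reference detail: first lines
            (x0, y0, x1, y1),                        # the selected image section
        ]
        # clip each window to the sensor dimensions
        return [(max(0, a), max(0, b), min(sensor_width, c), min(sensor_height, d))
                for a, b, c, d in windows]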
• A preprocessing of a part or of the entire image 26 can also be undertaken in order to precisely determine the extent and type of the data reduction. For this purpose, either the whole image or a certain subarea of the image 26 would be read out, then completely or partially pre-analyzed, in order to determine the image characteristic and, if necessary, a further data reduction or compression for the further readout, the transmission and/or the final processing.
• The environmental data finally provided by the temporal and/or spatial selection can advantageously be used to perform an efficient registration with previously recorded reference images or videos. On the basis of this registration, which can also be applied to several users, and thus to several image streams, gaze data streams and additional input or trigger data, in parallel or in succession, a visual and quantitative aggregation and/or a comparison of eye movements and, if appropriate, other associated events can be performed.
• The reference image can, for example, also be a reduced or an unchanged scene image, or be composed of a plurality of reduced or unchanged scene images; a sketch of such a registration follows below.
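A sketch of such a registration of a gaze-selected image section against a reference image, using ORB features and a RANSAC homography via OpenCV; this is one possible technique, not the one prescribed by the description:

    import cv2
    import numpy as np

    def register_to_reference(scene_roi, reference_image):
        # estimate where a gaze-selected scene crop lies in the reference image
        orb = cv2.ORB_create(nfeatures=1000)
        k1, d1 = orb.detectAndCompute(scene_roi, None)
        k2, d2 = orb.detectAndCompute(reference_image, None)
        if d1 is None or d2 is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:50]
        if len(matches) < 4:
            return None
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return homography   # maps ROI coordinates into the reference image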
• The image data can also be used to more efficiently provide visually marked objects or scene areas (marked, for example, by a long fixation, or by a fixation combined with a speech command or key press, and so on) with overlays in a transparent head-mounted display, without having to process and/or search the entire scene video; the same applies to a content analysis such as OCR (optical character recognition) of signs, for example in order to translate only the sign that the user is currently looking at, as sketched below.
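A hedged sketch of such gaze-triggered sign translation; recognize_text and translate are placeholder callables for any OCR engine and translation backend, not APIs named in the description:

    def translate_viewed_sign(image, viewpoint_xy, radius,
                              recognize_text, translate):
        # run OCR and translation only on the gaze-selected image section
        # instead of searching the entire scene video for signs
        h, w = image.shape[:2]
        x, y = viewpoint_xy
        x0, x1 = max(0, x - radius), min(w, x + radius)
        y0, y1 = max(0, y - radius), min(h, y + radius)
        sign_crop = image[y0:y1, x0:x1]
        text = recognize_text(sign_crop)   # OCR restricted to the fixated sign
        return translate(text) if text else None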
• Likewise, a user can be given a detailed view, zoomed view or X-ray view of a viewed object. Furthermore, an efficient simultaneous localization and mapping (SLAM) and assembly of the scene is made possible, preferably using the angles and contrast depths of the fixations, which allows a 3D map of the room to be created, in particular over time and possibly across multiple users.
• The relative position of the selected area on the sensor can also be used for this, or it is conceivable that further selected areas on the sensor may be set relative to a current selected area in order to enable a more robust detection.
• Contextual information, such as distance, gaze and fixation behavior, and so on, can additionally be taken into account.
• These overlays can also be calculated as a function of a deviation of the viewpoint or gaze course from a target, such as a target viewing point or target course, and then be positioned on the basis of the target and actual values; a small sketch of such positioning follows below.
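Purely illustrative arithmetic for such target/actual positioning of an overlay; the clamping limit is an assumed parameter:

    def overlay_position(actual_gaze_xy, target_xy, max_offset_px: int = 150):
        # place a corrective overlay along the line from the actual viewpoint
        # towards the target point, clamped to a maximum screen offset
        dx = target_xy[0] - actual_gaze_xy[0]
        dy = target_xy[1] - actual_gaze_xy[1]
        dx = max(-max_offset_px, min(max_offset_px, dx))
        dy = max(-max_offset_px, min(max_offset_px, dy))
        return actual_gaze_xy[0] + dx, actual_gaze_xy[1] + dy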
• The device according to the invention and the method according to the invention can thus be used for a multitude of possible applications in which clear improvements are made possible while saving a great deal of time and computation.

Abstract

The invention relates to a device and a method for acquiring data relating to an environment (12) of a user by means of a scene image recording system (16) and for evaluating the acquired environmental data by means of an evaluation system (22). A spatial selection (36a, 36b, 36c) and/or a temporal selection is carried out with regard to an acquisition of the environmental data by means of the scene image recording system (16) and/or a transmission of the environmental data from the scene image recording system (16) to the evaluation system (22) and/or an evaluation of the environmental data by the evaluation system (22). Moreover, the selection (36a, 36b, 36c) is carried out as a function of at least one detected, time-variable first parameter (30, 32), in order to select the data according to their relevance and thus to allow a reduction of the data volume by reduction measures limited to less relevant data.
EP16751273.0A 2015-08-07 2016-08-05 Procédé et dispositif d'acquisition de données et d'évaluation de données d'environnement Ceased EP3332284A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP15180275 2015-08-07
PCT/EP2016/068815 WO2017025483A1 (fr) 2015-08-07 2016-08-05 Procédé et dispositif d'acquisition de données et d'évaluation de données d'environnement

Publications (1)

Publication Number Publication Date
EP3332284A1 true EP3332284A1 (fr) 2018-06-13

Family

ID=53886891

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16751273.0A Ceased EP3332284A1 (fr) 2015-08-07 2016-08-05 Procédé et dispositif d'acquisition de données et d'évaluation de données d'environnement

Country Status (4)

Country Link
US (1) US20200081524A1 (fr)
EP (1) EP3332284A1 (fr)
CN (1) CN108139582A (fr)
WO (1) WO2017025483A1 (fr)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10276210B2 (en) * 2015-11-18 2019-04-30 International Business Machines Corporation Video enhancement
US11284109B2 (en) * 2016-01-29 2022-03-22 Cable Television Laboratories, Inc. Visual coding for sensitivities to light, color and spatial resolution in human visual system
JP6996514B2 (ja) * 2016-10-26 2022-01-17 ソニーグループ株式会社 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム
US11429337B2 (en) 2017-02-08 2022-08-30 Immersive Robotics Pty Ltd Displaying content to users in a multiplayer venue
EP3635952A4 (fr) * 2017-06-05 2020-11-25 Immersive Robotics Pty Ltd Compression d'un flux de contenu numérique
US10511842B2 (en) 2017-10-06 2019-12-17 Qualcomm Incorporated System and method for foveated compression of image frames in a system on a chip
AU2018373495B2 (en) 2017-11-21 2023-01-05 Immersive Robotics Pty Ltd Frequency component selection for image compression
US10893261B2 (en) * 2017-12-06 2021-01-12 Dolby Laboratories Licensing Corporation Positional zero latency
US11695977B2 (en) * 2018-09-28 2023-07-04 Apple Inc. Electronic device content provisioning adjustments based on wireless communication channel bandwidth condition
CN110972202B (zh) 2018-09-28 2023-09-01 苹果公司 基于无线通信信道带宽条件的移动设备内容提供调节
US11808941B2 (en) * 2018-11-30 2023-11-07 Google Llc Augmented image generation using virtual content from wearable heads up display
DE102020105196A1 (de) * 2020-02-27 2021-09-02 Audi Aktiengesellschaft Verfahren zum Betreiben einer Datenbrille in einem Kraftfahrzeug sowie System mit einem Kraftfahrzeug und einer Datenbrille
US11354867B2 (en) 2020-03-04 2022-06-07 Apple Inc. Environment application model
CN113467605B (zh) 2020-03-31 2024-04-02 托比股份公司 用于对可视化数据进行预处理的方法、计算机程序产品和处理电路系统
US11622100B2 (en) * 2021-02-17 2023-04-04 flexxCOACH VR 360-degree virtual-reality system for dynamic events

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100883065B1 (ko) * 2007-08-29 2009-02-10 엘지전자 주식회사 모션 검출에 의한 녹화 제어장치 및 방법
US8576276B2 (en) * 2010-11-18 2013-11-05 Microsoft Corporation Head-mounted display device which provides surround video
EP2499960B1 (fr) * 2011-03-18 2015-04-22 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Procédé pour déterminer au moins un paramètre de deux yeux en définissant des débits de données et dispositif de mesure optique
US9897805B2 (en) * 2013-06-07 2018-02-20 Sony Interactive Entertainment Inc. Image rendering responsive to user actions in head mounted display
EP2642425A1 (fr) * 2012-03-22 2013-09-25 SensoMotoric Instruments GmbH Procédé et dispositif d'évaluation des résultats d'un enregistrement de vue

Also Published As

Publication number Publication date
US20200081524A1 (en) 2020-03-12
WO2017025483A1 (fr) 2017-02-16
CN108139582A (zh) 2018-06-08

Similar Documents

Publication Publication Date Title
WO2017025483A1 (fr) Procédé et dispositif d'acquisition de données et d'évaluation de données d'environnement
DE102007056528B3 (de) Verfahren und Vorrichtung zum Auffinden und Verfolgen von Augenpaaren
EP2828794B1 (fr) Procédé et dispositif d'évaluation des résultats d'un enregistrement de vue
EP2157903B1 (fr) Procede de mesure de perception
DE102004044771B4 (de) Verfahren zur bildbasierten Fahreridentifikation in einem Kraftfahrzeug
EP3394708B1 (fr) Procédé de fonctionnement d'un système de réalité virtuelle et système de réalité virtuelle associé
WO2015117907A2 (fr) Processeur de données adptatif sélectif
WO2017153355A1 (fr) Procédé et dispositif d'exécution de rendu d'un regard
WO2017153354A1 (fr) Procédé et dispositif d'évaluation de représentations d'observation
DE112018005191T5 (de) System und Verfahren zur Verbesserung des Signal-Rausch-Verhältnisses bei der Objektverfolgung unter schlechten Lichtbedingungen
EP3123278B1 (fr) Procédé et système de fonctionnement d'un dispositif d'affichage
DE102016006242B4 (de) Anzeigevorrichtung des kopfgetragenen Typs, Steuerverfahren für Anzeigevorrichtung des kopfgetragenen Typs, Bildverarbeitungssystem und Speichermedium
DE112017003723T5 (de) Verbesserte Steuerung einer robotischen Prothese durch ein kognitives System
DE102009035422B4 (de) Verfahren zur geometrischen Bildtransformation
DE112021005703T5 (de) Informationsverarbeitungseinrichtung und informationsverarbeitungsverfahren
DE102023002197A1 (de) Verfahren und Einrichtung zur Bildanzeige für ein Fahrzeug
DE102019123220A1 (de) Zusammenfassen von Videos von mehreren sich bewegenden Videokameras
DE10046859B4 (de) System zur Blickrichtungsdetektion aus Bilddaten
DE112017007162T5 (de) Gesichtsermittlungsvorrichtung, dazugehöriges Steuerungsverfahren und Programm
DE102020214824A1 (de) Verfahren zum Betreiben eines Visualisierungssystems bei einer chirurgischen Anwendung und Visualisierungssystem für eine chirurgische Anwendung
DE102007001738B4 (de) Verfahren und Computerprogrammprodukt zur Blickerfassung
DE102020204612A1 (de) Verfahren zum Bestimmen eines zeitlichen Temperaturverlaufs eines Körperteils eines Lebewesens, sowie elektronisches Bestimmungssystem
DE102016009142A1 (de) Blickbasierte Aktionssteuerung durch adaptive Erkennung von natürlichen Objektauswahlsequenzen
DE202018006796U1 (de) System zur Vorhersage blickbezogener Parameter
DE102020113972A1 (de) Videoanalyse- und -managementtechniquen für medienerfassung und -aufbewahrung

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180207

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: APPLE INC.

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: APPLE INC.

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190909

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20231109