EP3789962A1 - Method and device for generating data for a two- or three-dimensional representation of at least part of an object and for generating the two- or three-dimensional representation of the at least one part of an object
- Publication number
- EP3789962A1 (application EP20204118.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- model
- image
- vertices
- engine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/60—Shadow generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
Definitions
- the present invention relates to the field of the representation of objects, in particular to the field of the creation of two- or three-dimensional sequences or films.
- Embodiments relate to a method and a device for generating data for a two- or three-dimensional representation of at least part of an object.
- Further exemplary embodiments relate to a method and a system for generating a two- or three-dimensional representation of at least part of an object.
- Fig. 1 shows a schematic representation of a conventional approach to producing, transmitting, and displaying a stereoscopic film comprising a plurality of frames.
- a cube 100 is shown as the object to be represented.
- a first camera 102 creates a first image 100a of the cube from a first perspective
- a second camera 104 creates a second image 100b of the cube from a second perspective that differs from the first perspective.
- the recordings 100a and 100b of the cube 100 are generated from different angles.
- the individual images 100a, 100b generated and received in this way are provided to a suitable stereoscopic display unit 108, for example a monitor, for display.
- a common 3D camera can also be used, which likewise generates two recordings of the object 100, which are then transmitted to the monitor 108 for display in the manner described above.
- the conventional approach described is disadvantageous, since the amount of data to be transmitted via the transmission medium 106, at least the two images 100a and 100b, is very large, which entails a correspondingly long data transmission time. Even if the two-dimensional recordings or images 100a, 100b are compressed, the time required for compressing the data is long, so that the total transmission time from the point at which the image of the object 100 is generated to the point at which the three-dimensional reproduction is to take place is very high.
- the just mentioned transmission of a live stream in three-dimensional quality or the transmission of a three-dimensional live sequence is desirable, for example, in connection with the recording of people and surrounding spaces.
- the recording of people and surrounding spaces using 3D cameras and playing them back as 3D films harbors the above-mentioned problem of immense amounts of data that cannot be transferred on the Internet, which stems from the fact that the data is stored as conventional sequences of 2D images and must be transferred.
- Approaches known in the prior art deal with the coding and transmission of 2D image data from video recordings, but the amount of data and the associated coding time for this two-dimensional solution in connection with 3D image data is too great, so that the fundamental problem of transmitting the data required for the three-dimensional representation of an object, for example as a moving image, remains.
- the present invention is based on the object of creating an improved approach to the three-dimensional representation of an object.
- the approach according to the invention for generating data for a two- or three-dimensional representation is advantageous because, unlike in conventional approaches, the complex transmission of two-dimensional image data is dispensed with. Rather, on the basis of the three-dimensional image data, which represent a 3D image of the part of the object, a 3D model is created which represents at least the part of the object from which the 3D image was obtained.
- This 3D model can be, for example, a grid model or a triangular network, as is known, for example, from the field of CAD technology.
- the model obtained in this way can be described by the positions of the vertices in three-dimensional space, for example in the Cartesian coordinate system by the X, Y and Z values of the vertices.
- the color values can be assigned to the corresponding vertices, and texture information is also transmitted if necessary.
- the amount of data generated in this way is many times less than the amount of data that arises when transmitting a 2D image with a size of 1024 x 768 pixels, so that, owing to the small amount of data needed to represent the object in three-dimensional form, a fast and delay-free transmission of the data via a transmission medium is made possible; in particular, the problems with the large amounts of data arising in the conventional prior art are avoided.
- the data generated in this way can be used either to generate a three-dimensional display (e.g. a 3D live sequence or a 3D film) or to generate a two-dimensional display (e.g. a 2D live sequence or a 2D film) of the object or of the part of the object on a suitable display device.
- the 3D image comprises the part of the object and a background, the method further comprising extracting the background from the data using the Z value of each vertex, for example by removing a vertex from the data if the Z value of the vertex is outside a predefined range.
- it can additionally be provided to correct the edge region of the object in that depth distances that exceed a predetermined threshold value are filtered out.
- This procedure is advantageous because it allows the recorded object to be displayed in a simple manner without the background that was also recorded, so that only the data for the actual object are generated, but not the background data that may not be required at all, whereby a further reduction in the amount of data is achieved. Furthermore, this procedure makes it possible for the object, displayed three-dimensionally from the generated data, to be shown on the receiving side in a different context, for example against a different background.
- the 3D model is generated using at least a first 3D image and a second 3D image of the object from different positions in each case, the first and second 3D images at least partially overlapping.
- the different positions can be selected such that an area of the object that is not visible in the first 3D image of the object is visible in the second 3D image of the object.
- it can be provided to generate a first 3D model using the first 3D image and a second 3D model using the second 3D image, and to combine the first and second 3D models into a common 3D model, the data being provided using the common 3D model.
- combining the first and second 3D models into a common 3D model can include the following: arranging the first and second 3D models such that their overlapping areas are congruent, identifying the vertices from the first 3D model and from the second 3D model that lie within a plane at a predefined distance from one another, and merging the identified vertices into a new vertex in the common 3D model.
- the identification and combination is preferably repeated for a plurality of planes, the number and the spacing of the plurality of planes being selected such that the part of the object is represented by the common 3D model.
- a plurality of 3D recordings of the object from different, at least partially overlapping positions is used in order to generate the 3D model, so that the 3D model comprises several sections of the object or even the complete object.
- This embodiment is advantageous because it ensures that all areas of the part of the object that is to be represented three-dimensionally are described by corresponding data of the 3D model; in particular, sections of the object that are not visible in one representation can be recognized from the second representation taken from a different perspective and merged into the model.
- this enables a 3D model of the entire object to be generated, which is then described by the vertices and the color values of the 3D model, so that at the receiving point, when the recordings are retrieved, it is possible in a simple manner to view the object from any definable perspective, in particular a view without faulty or missing points.
- the provision of the data includes reducing the amount of data without data loss by determining the spatial distances of the vertices and encoding the spatial distances as successive differences starting from a predetermined starting point.
- the coding can begin at a lower point of the object and continue in a spiral to an upper point of the object.
- This procedure is advantageous because it enables the amount of data to be reduced again without data loss, since, starting from the starting point, which is completely encoded, only the difference values to the positions of the neighboring vertices need to be encoded, which results in the mentioned further reduction in the amount of data.
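As an illustration of this successive-difference coding, a minimal Python sketch is given below: only the starting vertex is stored absolutely, every further vertex only as an offset from its predecessor, and a running sum restores the original positions without loss. The function names and the toy point set are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def delta_encode(vertices: np.ndarray) -> np.ndarray:
    """Encode an ordered (N, 3) array of X/Y/Z vertex positions as
    successive differences: the first row stays absolute, every further
    row stores only the offset from its predecessor."""
    deltas = np.empty_like(vertices)
    deltas[0] = vertices[0]                 # fully encoded starting point
    deltas[1:] = np.diff(vertices, axis=0)  # small difference values
    return deltas

def delta_decode(deltas: np.ndarray) -> np.ndarray:
    """Invert delta_encode without loss: a running sum restores the
    absolute positions."""
    return np.cumsum(deltas, axis=0)

# Round trip on a toy, spirally ordered point set:
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.05], [0.1, 0.1, 0.1]])
assert np.allclose(delta_decode(delta_encode(pts)), pts)
```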
- the generation of the 3D model includes the provision of the 3D image of the object or the part of the object by a 3D camera or the provision of a stereoscopic image of the object or the part of the object.
- This embodiment is advantageous because known approaches for generating 3D recordings or stereoscopic recordings can be used, which then serve as input for the process according to the invention, carried out for example by a 3D engine, in order to generate the 3D model and encode it accordingly.
- the object is a person, an animal, an object or a background.
- This procedure is advantageous because the approach according to the invention is not subject to any restrictions with regard to the object to be displayed, since a significantly reduced data set for describing the same in three-dimensional form can be obtained from the 3D images of the object in the manner according to the invention.
- the object may also be a background
- this is advantageous, since it makes it possible to record a desired background and to provide it at a remote position for display in a three-dimensional configuration; in particular, it enables the display of an entire background, for example a room, recorded by several cameras in a three-dimensional manner. The data for displaying the background are created on the basis of the 3D model and transmitted with a reduced amount of data, making it possible to reproduce the background, for example the room, at the receiving point in such a way that a viewer at the receiving location can perceive the background from any position/perspective.
- the generation of the 3D model and the provision of the data are repeated at a predetermined repetition rate in order to generate a multiplicity of chronologically successive frames which each contain the provided data and can be displayed as a 3D sequence.
- the repetition rate is preferably selected such that up to 30 frames are generated in one second.
- This procedure is advantageous because it opens up the possibility of generating 3D sequences or 3D films which, due to the small amount of data in each individual frame, can be transmitted without problems with regard to transmission time, transmission bandwidth and data volume from the point at which the data are generated to a receiving point at which the data are to be displayed.
- This procedure allows for the first time, unlike in the prior art, the reliable and fast generation of data for displaying a 3D film or a 3D sequence that are suitable for transmission over a transmission medium with limited bandwidth, for example the Internet.
- the object is at least the face of a person
- the method according to the invention comprises the following for each frame: providing a static face model of an original face of another person, determining a position of the person's face in space when generating the 3D image, superimposing the 3D model of the person's face with the static face model of the other person, adapting the 3D model of the person's face, in those places where there is no movement, to the static face model of the other person, creating a texture from the 3D image of the person's face that is transparent at the respective points where there is movement, in order to generate a shadow mask texture, and semi-transparently texturing the shadow mask texture onto the adapted 3D model of the person's face, so that a resulting 3D sequence shows, to the human eye, a moving and animated representation of the original face.
- This procedure is advantageous because it makes it possible, in a simple manner, to assign the facial contours of the other person to a person who is similar in physique and stature to that known person, so that the data provided according to the invention, which have a small amount of data, can be processed at the receiving point in such a way that the viewer there gains the impression that the other person is shown, which is particularly advantageous in the entertainment industry and similar areas.
- This procedure is advantageous because, due to the approach according to the invention, the data generated for the three-dimensional representation have only a small amount of data and are thus transmitted in a simple manner.
- the data received in this way, which describe the 3D model, make it possible, by using a 3D engine, to generate a corresponding 3D image of the part of the object, which can then be used conventionally for three-dimensional representation on a display device, for example a stereoscopic monitor.
- This can be done, for example, by generating a 3D recording by the 3D engine in order to display or project the object stereoscopically.
- the 3D engine preferably generates up to 30 3D recordings per second, which is advantageous because it allows moving images, that is to say 3D films or 3D sequences, to be generated from the received data.
- the method comprises displaying the 3D recording by a display device, for example an autostereoscopic 3D monitor or a battery of powerful projectors, it being possible for the display device to work using the stereoscopic Pepper's Ghost method for generating 3D holograms.
- the method can include projecting the 3D recordings generated by the 3D engine onto a glass pane, which includes a lenticular lens or a suitable 3D structure, so that a 3D hologram arises for the human eye within an area in front of the glass pane.
- This procedure is advantageous because it creates the possibility of generating 3D representations of objects using conventional display devices that operate on the basis of input data as also used in the prior art, the input data now being generated according to the invention from the 3D model.
- the transmission of the data can comprise a transmission over the Internet or an intranet, for example through a client-server relationship, for example using the TCP/IP protocol, the UDP protocol or a server-side protocol.
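A minimal sketch of such a transmission over a TCP connection is shown below, with a simple length-prefixed framing of the vertex and color blocks of one frame; the framing and the function names are illustrative assumptions and not part of the patent.

```python
import socket
import struct

def send_frame(sock: socket.socket, vertices: bytes, colors: bytes) -> None:
    """Send one frame's 3D-model payload; each block is preceded by its
    length so the receiver knows where it ends."""
    for block in (vertices, colors):
        sock.sendall(struct.pack("!I", len(block)) + block)

def recv_frame(sock: socket.socket) -> tuple[bytes, bytes]:
    """Counterpart to send_frame: read two length-prefixed blocks."""
    def read_exact(n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-frame")
            buf += chunk
        return buf

    blocks = []
    for _ in range(2):
        (length,) = struct.unpack("!I", read_exact(4))
        blocks.append(read_exact(length))
    return blocks[0], blocks[1]
```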
- local storage of the data as a file can also be provided.
- This procedure represents a particular advantage of the approach according to the invention, since a 3D model of the object or of a part of the object is generated on the transmitter side and transmitted to the receiver side in the manner described above, with a reduced amount of data.
- This enables the original 3D model to be recovered on the receiver side by means of a corresponding 3D engine, so that the entire 3D representation of the object is available on the receiver side. If the entire object has been captured and processed as a 3D model, the possibility is created at the receiving end that, for example, a user selects a perspective from which he would like to look at the object, and the 3D recordings required for the corresponding representation are generated from the 3D model on the recipient side.
- the receiver side already has the 3D model of the object or the part of the object, so that the receiver side can determine from which perspective this 3D model is to be viewed, so that the corresponding 3D recordings for the display can be generated on the receiver-side monitor without the need for a new recording and thus a new transmission of the data or a return channel to the transmitter.
- the present invention also provides a computer program with instructions for carrying out the method according to the invention when the instructions are carried out by a computer, whereby the advantages mentioned above are likewise achieved in a computer implementation of the approach according to the invention.
- the 3D engine is configured to extract a background in the 3D image from the data using the Z value of each vertex, an edge area of the part of the object preferably being corrected by filtering out depth distances that exceed a predetermined threshold value.
- the input is configured to receive at least a first 3D image and a second 3D image of the object from different positions, the first and second 3D images at least partially overlapping.
- the 3D engine is configured to generate the 3D model using the first 3D image and the second 3D image of the object.
- the 3D engine preferably generates a first 3D model using the first 3D image, a second 3D model using the second 3D image, and a common 3D model using the first and second 3D models, the data being generated using the common 3D model.
- the 3D engine reduces the amount of data without data loss by determining the spatial distances between the vertices and encoding the spatial distances as a spindle-like sequence from a specified starting point to an end point.
- the device comprises a 3D camera or a stereoscopic recording device for generating the 3D recording of the object, the 3D camera or the stereoscopic recording device being connected to the input.
- the 3D engine is configured to generate the 3D model and the data with a specific repetition rate in order to generate a multiplicity of chronologically successive frames that each contain the provided data and can be displayed as a 3D sequence.
- the object is at least the face of a person
- the 3D engine is configured, for each frame, to determine a position of the person's face in space when the 3D image is generated; to superimpose the 3D model of the person's face with a static face model of an original face of another person; to adapt the 3D model of the person's face, at those places where there is no movement, to the static face model of the other person; to create a texture from the 3D image of the person's face that is transparent at those points where there is movement, in order to generate a shadow mask texture; and to texture the shadow mask texture semi-transparently onto the adapted 3D model of the person's face, so that a resulting 3D sequence shows, to the human eye, a moving and animated representation of the original face.
- the system comprises a display device, for example in the form of an autostereoscopic 3D monitor or in the form of a battery of powerful projectors, which is connected to the 3D engine.
- the display device preferably works using the stereoscopic 3D Pepper's Ghost method for generating holograms, and is configured to project the 3D recordings generated by the 3D engine onto a pane of glass that includes a lenticular lens or a suitable 3D structure, so that a 3D hologram is created for the human eye within an area in front of the glass pane.
- the system's 3D engine is configured to receive a selection of a perspective from which the object is to be viewed and to represent the object from the selected perspective on the basis of the received data describing the 3D model of the object, so that no return channel to the point at which the 3D image of the object is generated is required.
- the system according to the invention has the advantages described in more detail above in connection with the method.
- Embodiments of the present invention thus create the possibility for interactive real-time 3D graphics.
- the problem existing in the prior art with regard to the enormous amount of data relating to the representation of 3D objects is addressed according to embodiments with the aid of a 3D engine on which, for example, so-called real-time software for displaying computer graphics runs.
- the spatial component of the 3D model is encoded as a deterministic sequence of spatial distances, so that the amount of data is noticeably reduced while the quality remains the same.
- the resulting amount of data as a result of spatial distances is orders of magnitude smaller than the amount of data arising in the prior art for the transmission of 3D image data as a sequence of 2D images.
- the spatial coding of distance values can be carried out faster than the compression of 2D image data, which makes it possible to carry out a live transmission of 3D image data in real time and to display it, using a suitable 3D engine at the receiver, in three dimensions with interactive change of viewpoint or as a stereoscopic film with several 2D images per frame.
- Fig. 2 shows a schematic representation of the procedure according to the invention for producing, transmitting and displaying a stereoscopic film with individual models generated by a 3D engine. As in Fig. 1, the three-dimensional image of a cube 100 was chosen as the basis for explaining the approach according to the invention.
- Fig. 2 shows a schematic overall representation of the system 200 according to embodiments of the present invention for generating a three-dimensional representation of at least part of an object, namely the cube 100.
- the system 200 includes the transmission medium 106 already described with reference to Fig. 1, which is arranged between a transmitter side 202 and a receiver side 204. Elements already described with reference to Fig. 1 are provided with the same reference numerals in Fig. 2 and are not explained again in detail.
- the transmitter side 202 shown may, for example, comprise a device for generating data for a three-dimensional representation of at least part of an object; according to the exemplary embodiment shown, the device comprises a 3D engine 206, shown schematically in Fig. 2, which receives as input the 3D recording comprising the two images 100a and 100b of the cube 100. Based on the 3D recording, the 3D engine generates a model 208 of the cube 100 comprising the vertices A to H arranged at different positions in space. Depending on the different perspectives from which the cube 100 is recorded, the 3D model generated by the 3D engine 206 comprises either the entire object or only a part of it.
- the 3D engine 206 operates to define the 3D model 208 of the cube 100 in terms of the vertices A through H and the color values associated with the respective vertices.
- the scene recorded by the cameras 102, 104, which includes the cube 100, delivers at the output of the 3D engine 206 a single frame comprising color values and depth information.
- the color values each consist of one byte for red, one byte for green and one byte for blue (RGB color values), that is to say of 3 bytes in total.
- the depth information includes the X, Y, and Z values for the vertices of the 3D model in three-dimensional space.
- the X, Y and Z values can each be stored as floating point numbers with single precision, with a length of, for example, 32 bits.
- the sequence of X, Y and Z values of a point is called a vertex, and the set of all vertices of the 3D model 208 is called the point cloud.
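The per-vertex layout just described, three single-precision floats plus three color bytes, i.e. 15 bytes per vertex, can be made concrete with a short sketch; the vertex count used for the size comparison with the 1024 x 768 2D image mentioned above is an assumed example value.

```python
import struct

BYTES_PER_COLOR = 3          # one byte each for red, green, blue
BYTES_PER_POSITION = 3 * 4   # X, Y, Z as 32-bit single-precision floats

def pack_vertex(x: float, y: float, z: float, r: int, g: int, b: int) -> bytes:
    """Serialize one vertex: three float32 coordinates plus RGB bytes."""
    return struct.pack("<fff3B", x, y, z, r, g, b)

assert len(pack_vertex(0.5, 1.0, 2.0, 255, 128, 0)) == 15

# Rough per-frame size comparison (the vertex count is an assumed example):
num_vertices = 10_000
model_bytes = num_vertices * (BYTES_PER_POSITION + BYTES_PER_COLOR)
image_bytes = 1024 * 768 * 3  # uncompressed 24-bit 2D image
print(f"3D model: {model_bytes / 1024:.0f} KiB, "
      f"2D image: {image_bytes / 1024:.0f} KiB")
```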
- the example shown there of the generation of data for the three-dimensional representation of a cube 100 does not represent a restriction of the approach according to the invention; in fact, the object 100 can be any object with any complex structure, for example also a representation of a person or a machine.
- the 3D engines shown can be implemented, for example, by computer systems which, if necessary, are appropriately equipped in terms of hardware for generating 3D models and on which appropriate software is provided for execution.
- the device 202 is designed to repeatedly generate a 3D image 100a, 100b of the object 100 in order to provide data for a plurality of successive frames for transmission via the medium 106, so as to provide a 3D live sequence on the receiver side 204 or to represent a 3D film.
- up to 30 frames are generated per second by the device 202, i.e. up to 30 individual images of the object 100 are recorded.
- Each individual image is encoded via the 3D model using the 3D engine 206 as described above, so that each of the 30 frames per second contains a data set that contains the vertices and the color values of the object 100 assigned to the vertices at the time of recording.
- the device 202 and the method implemented thereby is advantageous because the amount of data that is transmitted via the transmission medium 106 is significantly reduced, which also significantly reduces the transmission time.
- the time to calculate the 3D models is shorter than the time required to compress individual images according to the prior art.
- the complete transmission of the 3D information via the 3D model enables the recipient to freely choose a point of view of the object on the recipient side, since the actual images can be generated and displayed at runtime by a 3D engine on the recipient side after the data of the 3D model have been transmitted.
- the data (the data set) describing the 3D model 208, which was generated at the transmitter end, are transmitted via the transmission medium 106 to the receiver end 204, so that the corresponding data describing the 3D model 208' are available at the receiving end 204.
- These data are fed to a 3D engine 210 which, on the basis of the 3D model, generates the 3D image of the object in the corresponding frame, for example the two images 100a, 100b, which are then, as in the prior art, provided to a suitable monitor 108 or another suitable display device for three-dimensional representation of the object on the receiver side 204.
- an approach is thus taught in which a sequence of individual 3D models is transmitted instead of a sequence of individual 2D images, as is the case in the prior art.
- the 3D models 208 are generated with the aid of the 3D engine 206 before the transmission, the 3D engine recognizing edges from the images 100a, 100b, for example, and generating the 3D model based on the recognized edges.
- the 3D engine 206 can determine common areas in the images 100a, 100b, e.g. common edges that belong to the object 100, in order to determine the resulting 3D model or 3D grid (mesh) of the object 100 therefrom.
- the 3D model described by the vertices and color values is converted back into the two images 100a and 100b by the receiver-side 3D engine 210 in order to display the 3D object from different angles with the individual images 100a and 100b, which can then be displayed on the stereoscopic output device 108.
- the object can include a person who is located within a scene.
- the scene includes the person who stands in a room and, for example, moves slightly to and fro in his place.
- the device 202 (see Fig. 2) records 30 frames of this scene per second, generates a corresponding 3D model of the scene for each frame, and describes it through the vertices and color values.
- the data thus generated for each frame comprise, as mentioned, the color values and the depth information, for example RGB values and X, Y and Z values, which each define a vertex, the plurality of vertices forming a point cloud.
- it may be desirable to extract the background from the scene, for example if only the representation of the person is to be transmitted to the receiving end 204 (see Fig. 2) and is to be displayed there, for example, with a different background that is either transmitted in advance or in parallel from the transmitter to the receiver, or that is specified at the receiver.
- each individual vertex is compared with a corridor distance (threshold distance) of the standing area of the person to be depicted; a vertex whose distance indicates that it is further away or closer than the corridor distance is recognized as a background vertex and omitted, i.e. only those vertices that lie within the corridor distance are permitted, and the data obtained in this way are those that depict the person as a point cloud. In other words, the number of vertices defining the point cloud per individual image is reduced by those which are clearly assigned to the background.
- Fig. 3 schematically shows this procedure: Fig. 3(a) shows the initially recorded scene 300, in which a person 302 (object) is shown standing on a floor 304. A rear wall 306 is also shown. The area 308, which is delimited by the dashed lines 308a, 308b running in the X direction, is shown along the Z coordinate. The area 308 is the above-mentioned threshold value corridor, and according to exemplary embodiments, vertices of a 3D model that was generated on the basis of the 3D recording of the scene 300 are deleted if they lie outside the threshold value corridor 308, so that the representation shown in Fig. 3(b) results, in which only the person 302 and part of the floor 304' remain.
- the threshold value corridor can also be restricted in the X direction, as indicated by the dashed lines 308c and 308d in Fig. 3(a). Vertices with X values outside the area 308' are deleted from the data describing the 3D model based on the scene 300, so that the remaining floor area 304' can be reduced even further.
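A minimal sketch of this threshold-corridor filtering is given below, assuming the point cloud is an (N, 3) array of X/Y/Z values per vertex; the corridor limits are illustrative assumptions.

```python
import numpy as np

def filter_background(points: np.ndarray,
                      z_range: tuple[float, float],
                      x_range: tuple[float, float]) -> np.ndarray:
    """Keep only vertices inside the threshold corridor: rows whose Z
    value (or X value) lies outside the corridor are treated as
    background and dropped."""
    x, z = points[:, 0], points[:, 2]
    inside = ((z >= z_range[0]) & (z <= z_range[1]) &
              (x >= x_range[0]) & (x <= x_range[1]))
    return points[inside]

# Example corridor (all values are illustrative):
cloud = np.array([[0.0, 1.0, 2.0],   # person, kept
                  [0.1, 1.2, 6.0],   # rear wall, dropped by Z
                  [3.0, 0.0, 2.1]])  # floor far to the side, dropped by X
print(filter_background(cloud, z_range=(1.5, 3.0), x_range=(-1.0, 1.0)))
```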
- the data model of the person 302 generated by the 3D engine can be, for example, a grid model or a triangular network, depending on the 3D engine used.
- the edge region of the 3D model can be smoothed by a smoothing algorithm, for example by a smoothing algorithm that filters out large depth values or depth distances.
- an object 100, 302 can be recorded from multiple perspectives.
- One reason for the multiple recordings can be that the object has to be mapped completely so that a complete 3D model is available.
- a situation can arise in which, due to the design of the object, for example a part of a person's body being covered by the person's hand, or due to a protruding section of an object, the sections lying behind are not captured by the 3D image. This creates so-called holes in the 3D image, which can be seen as black spots in the 3D model, for example in the triangular grid.
- these holes are created because the distance between the infrared sensor and the imaging camera is a few centimeters, so that the viewing pyramids of the two sensors do not completely overlap.
- in the case of perspective concealments, for example a person's hand in front of their body, areas are created that have no triangular network or no section of the 3D model as a basis, or areas that have no image texture.
- this problem is solved in that at least two 3D cameras are used, with other exemplary embodiments also using more than two 3D cameras, which are arranged at a distance from one another so that the 3D recordings generated thereby at least partially overlap.
- This enables the areas of a first 3D image in which one of the above-mentioned holes is located to be covered by the 3D image of the additional camera(s).
- a triangular network is created from the vertices, that is to say from the various point clouds of the various 3D recordings, by triangulation, and the recorded images are projected onto this triangular network. Triangulation can be performed, for example, using the Delaunay method or using an elevation field. If the two triangular networks are placed on top of each other, black areas without 3D information or color information are no longer visible.
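A sketch of such a triangulation using the Delaunay method is shown below, here applied to the X/Y coordinates in the style of an elevation field via scipy.spatial.Delaunay; the helper name and the toy points are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_depth_points(points: np.ndarray) -> np.ndarray:
    """Build a triangle mesh over an (N, 3) point cloud by Delaunay-
    triangulating the X/Y coordinates (elevation-field style), keeping
    Z as the height of each vertex. Returns (M, 3) vertex indices, one
    row per triangle."""
    tri = Delaunay(points[:, :2])  # 2D triangulation in the image plane
    return tri.simplices

pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.1],
                [0.0, 1.0, 1.2], [1.0, 1.0, 1.0]])
print(triangulate_depth_points(pts))  # two triangles covering the quad
```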
- the textured 3D models or triangular networks of different, overlapping views of the person obtained in the manner described above are subsequently, according to exemplary embodiments, combined to form a 360 ° all-round view of the person.
- two overlapping triangular networks are brought to overlap in overlapping areas and, beginning with a predetermined plane, for example a horizontal plane (X-Z plane), those vertices are identified which are at a predetermined distance from one another.
- the amount of data is set depending on the selection of the distance, and the distance can be varied depending on the circumstances. For example, depending on the later transmission of the data, if the transmission medium is known, the amount of data can be adapted to a bandwidth and the distance can be determined accordingly.
- the identified points are combined into a new triangular network, and once a point set, for example a circular point set, has been found on a plane, the process moves to the next higher plane; this is repeated until the relevant point cloud or the relevant vertices for the outer shell of the object have been found. For example, all resulting points can be represented from bottom to top as a connected spindle. As a result, a textured, connected point cloud of the outer shell of the object is obtained per frame, in short, a plurality of X, Y, Z values with an ordering.
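The plane-by-plane merging can be sketched as follows, assuming two already aligned, overlapping (N, 3) point clouds with Y as the height axis; the slab height and merge distance are illustrative assumptions that directly control the resulting amount of data, as described above.

```python
import numpy as np

def merge_point_clouds(cloud_a: np.ndarray, cloud_b: np.ndarray,
                       slab_height: float = 0.05,
                       merge_dist: float = 0.02) -> np.ndarray:
    """Sweep horizontal slabs from bottom to top; within each slab a
    point of cloud_a and its nearest neighbour in cloud_b are replaced
    by their midpoint when closer than merge_dist, all other points
    are kept."""
    def slab(cloud: np.ndarray, y: float) -> np.ndarray:
        sel = (cloud[:, 1] >= y) & (cloud[:, 1] < y + slab_height)
        return cloud[sel]

    all_y = np.concatenate([cloud_a[:, 1], cloud_b[:, 1]])
    merged = []
    y = float(all_y.min())
    while y <= all_y.max():
        in_a, in_b = slab(cloud_a, y), slab(cloud_b, y)
        used_b = np.zeros(len(in_b), dtype=bool)
        for p in in_a:
            if len(in_b):
                dist = np.linalg.norm(in_b - p, axis=1)
                j = int(np.argmin(dist))
                if dist[j] < merge_dist and not used_b[j]:
                    merged.append((p + in_b[j]) / 2)  # one new merged vertex
                    used_b[j] = True
                    continue
            merged.append(p)              # no close partner: keep the point
        merged.extend(in_b[~used_b])      # keep unmatched points of cloud_b
        y += slab_height
    return np.asarray(merged)
```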
- Fig. 4 shows a flow diagram of an embodiment of the method according to the present invention for generating data for a three-dimensional representation of at least part of an object, as explained above.
- in a first step S100, a 3D model of the part of the object is generated using a 3D image of the part of the object.
- in a step S102, data are provided using the 3D model, which describe the vertices of the part of the object in three-dimensional space and the color values assigned to the vertices.
- the method comprises, as an optional step, extracting (S104) the background from the data using the Z value of each vertex, as explained above, and, as a further optional step, correcting (S106) an edge region of the object by filtering out depth distances that exceed a predetermined threshold value, as explained above.
- the correction of the edge area can include anti-aliasing and the avoidance of spikes that contain large depth values with a steep rise.
- steps S100 to S106 are repeated for each frame of the sequence/film in order to generate a 3D live sequence or a 3D film, each frame being an individual image of the sequence/film.
- the repetition rate is 30 frames per second.
- texture information can be interpolated between the vertices and thus requires little data bandwidth.
- Fig. 5 shows a flowchart with further steps according to exemplary embodiments of the invention, in which the 3D model is generated using at least a first 3D image and a second 3D image of the object from different, at least partially overlapping positions.
- a first 3D model is generated using the first 3D image and a second 3D model is generated using the second 3D image.
- the first 3D model and the second 3D model are combined in order to generate a common 3D model, the data provided in step S102 (see Fig. 4) being generated and provided using the common 3D model.
- the combination of the two 3D models includes step S110a, in which the two 3D models are arranged in such a way that their overlapping areas are congruent.
- in step S110b, the vertices that lie within a predefined distance of one another are identified, and the identified vertices are combined in step S110c.
- Steps S110a to S110c are repeated if it is determined in step S112 that not all of the predetermined planes relating to the 3D model have been processed; in this case, a further plane is selected in step S114, and the method returns to step S110b. Otherwise, if it is determined in step S112 that all planes have been processed, the method ends in step S116.
- the object can also be the background of a room which is to be displayed on the receiver side in such a way that a user on the receiver side can view the room from different perspectives and can also move within the room within predetermined limits.
- the 3D recording includes a 3D recording of the background of a room, obtained for example in accordance with steps S100 to S116, but without step S104, since it is of course not sensible to remove the background at this point.
- the steps in block S110 connect the various recordings of the interior space in order to produce the interior shell of the space.
- an area, for example a circular area, can be defined in which a user can "move freely" in order to receive the illusion of a live 3D film.
- a further exemplary embodiment of the approach according to the invention is explained in more detail below, in which a possibility is opened up of changing a human face.
- Such approaches involve modifying a recording of a person who looks similar to another, for example well-known, personality, for instance in the area of the face, to such an extent that the similarity becomes even greater.
- a change in the 3D model or the triangular network is effected at a high frequency, and the resulting animation consists, as in a film, of a sequence of completely new triangular networks that are displayed one after the other in order to give the human viewer the appearance of a moving picture.
- this hologram illusion arises for a person who can see with two eyes. According to the present invention, this hologram illusion can be changed at runtime.
- Fig. 6 shows a flowchart illustrating the exemplary embodiment of the method according to the invention for changing a human face.
- the method is based on an object which represents the face of a person, and in a step S118 a static face model of an original face of another person is first provided.
- the person can, for example, be a so-called look-alike who looks similar to the other person, for example a known personality.
- the face model of the other person's original face is a static 3D model with texture that was created, for example, from a photo or film recording of the other person and can therefore have a correspondingly high recognition effect.
- in a step S120, a position of the person's face in space is determined when the 3D image is generated.
- for this purpose, a position determination system, e.g. Nexonar, a device equipped with a sonar sensor that is worn on the back of the head, or raycasting can be used.
- in a step S122, the 3D model of the person's face is superimposed on the static face model of the other person, and in step S124 the 3D model of the person's face is adapted, at those places where there is no movement, to the static face model of the other person.
- the difference between the two 3D models or triangular grids can be added at those points where there is no movement, e.g. in the area of the nose, cheeks and the like, so that a common 3D model or 3D mesh results, which is updated at runtime because the steps just described are carried out for each frame/individual image of the 3D film.
- in step S126, a texture is created from the 3D image of the person's face, transparent specifically at those points where there is movement, in order to generate a shadow mask texture, which is textured semi-transparently onto the common or new 3D model in step S128 in order to obtain, at runtime, a 3D model that is recognizable to the human eye as a moving and animated representation of the original face.
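The per-frame steps S120 to S128 can be summarized in a hedged sketch; the parameter names, the vertex-distance motion heuristic and the precomputed per-texel motion mask are illustrative assumptions, and the final semi-transparent texturing is left to the 3D engine.

```python
import numpy as np

def process_face_frame(live_mesh: np.ndarray, static_mesh: np.ndarray,
                       live_texture: np.ndarray, motion_mask: np.ndarray,
                       motion_thresh: float = 0.01):
    """One frame of the face pipeline: live_mesh and static_mesh are
    (N, 3) vertex arrays with identical topology, live_texture an
    (H, W, 3) RGB image with values 0..255, motion_mask an (H, W)
    boolean map marking texels of moving regions (mouth, eyes)."""
    # Flag vertices as moving where live and static geometry differ.
    moving = np.linalg.norm(live_mesh - static_mesh, axis=1) > motion_thresh

    # Adapt the live model to the static original face wherever nothing
    # moves; keep the live geometry only at the moving vertices.
    adapted = np.where(moving[:, None], live_mesh, static_mesh)

    # Shadow-mask texture: transparent (alpha 0) where there is motion
    # so the live recording shows through, opaque elsewhere; the 3D
    # engine then textures this semi-transparently onto the model.
    alpha = np.where(motion_mask, 0.0, 1.0)
    shadow_mask = np.dstack([live_texture / 255.0, alpha])  # RGBA in [0, 1]

    return adapted, shadow_mask
```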
- Based on Fig. 7, an exemplary embodiment of the method according to the invention for generating a three-dimensional representation of at least part of an object is explained in more detail, as carried out, for example, by the system described with reference to Fig. 2.
- in a first step S130, data for the three-dimensional representation of the at least one part of the object are generated, specifically in accordance with the method described above, as explained, for example, with reference to Figs. 4, 5 and 6, or as explained for the transmitter side 202 in Fig. 2.
- in step S132, the data are transmitted via the transmission medium 106 from the transmitter side 202 to the receiver side 204 (see Fig. 2).
- in step S134, the 3D recordings 100a and 100b shown in Fig. 2 are generated, for example by the 3D engine 210 on the receiver side 204.
- in step S136, the 3D recording is generated by a 3D engine in order to display or project the object stereoscopically.
- the 3D image is displayed by a display device, for example an autostereoscopic 3D monitor 108 or a battery of powerful projectors.
- the data generated in step S130 can again be suitably reduced by quantization, but accepting a loss of data.
- the amount of data can be binary coded and further reduced, e.g., by run length coding and similar approaches known in the art.
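A minimal byte-wise run-length coder of the kind alluded to might look as follows; the (count, value) format and the function names are assumptions for illustration.

```python
def run_length_encode(data: bytes) -> bytes:
    """Each run of up to 255 equal bytes becomes a (count, value) pair."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while run < 255 and i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def run_length_decode(encoded: bytes) -> bytes:
    """Invert run_length_encode."""
    out = bytearray()
    for count, value in zip(encoded[::2], encoded[1::2]):
        out += bytes([value]) * count
    return bytes(out)

payload = b"\x00" * 40 + b"\x7f" * 3
assert run_length_decode(run_length_encode(payload)) == payload
```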
- the transmission in step S132 can take place via the Internet or an intranet, for example through a client-server relationship using the TCP/IP protocol, the UDP protocol or a server-side protocol.
- the transmission S132 can also lead to a local storage of the received individual images / frames as a local file.
- in step S134, before the data are provided to the 3D engine 210, the data can be unpacked in accordance with their coding before transmission and buffered, the buffering ensuring that, after an initial desired filling state has been reached, continuous processing of the data packets is possible even if the data rates at which the corresponding data packets arrive at the receiver differ or vary.
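A sketch of such receive-side buffering is given below; playback starts only once an initial fill level is reached, and the threshold value is an assumed example.

```python
from collections import deque

class FrameBuffer:
    """Receive-side buffer: frames are queued as they arrive, and
    playback starts only after an initial fill level is reached, so
    that varying network data rates do not stall the display."""

    def __init__(self, start_threshold: int = 10):
        self.queue: deque = deque()
        self.start_threshold = start_threshold
        self.playing = False

    def push(self, frame: bytes) -> None:
        self.queue.append(frame)
        if not self.playing and len(self.queue) >= self.start_threshold:
            self.playing = True  # initial desired filling state reached

    def pop(self):
        """Next frame for the 3D engine, or None to repeat the previous
        frame while the buffer refills."""
        if self.playing and self.queue:
            return self.queue.popleft()
        return None
```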
- step S138 can include a display using the stereoscopic Pepper's Ghost method for generating 3D holograms, as shown in step S140, in which an autostereoscopic 3D monitor or a battery of powerful projectors is provided in order to project suitable 3D images, for example the images 100a, 100b generated by the 3D engine 210 (see Fig. 2), onto a glass pane comprising a lenticular lens or a suitable 3D structure, so that a 3D hologram is created for the human eye in a predetermined viewing direction in front of the glass pane.
- the present invention also finds application in surveillance for the display and transmission of changing content.
- the monitoring, detection and transmission of changing content is particularly important.
- differences between a static 3D model and a recorded live image are generated within certain limits (threshold values) in order to detect changes more quickly and more precisely than in a 2D video image.
- a static 3D model of the drilling rig is compared several times per second with a 3D recording of the drilling rig from one perspective, it being possible, for example, for the drilling rig to be animated at runtime using the 3D engine.
- Changes in the live 3D model, such as a person entering the recording area, are detected by comparison with the static 3D model and can trigger alarms.
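A hedged sketch of this comparison is shown below, assuming pre-aligned (N, 3) vertex sets for the static reference and the live capture; the threshold values and the alarm hook are illustrative assumptions.

```python
import numpy as np

def detect_change(static_model: np.ndarray, live_model: np.ndarray,
                  threshold: float = 0.05, min_vertices: int = 50) -> bool:
    """Report a change when more than min_vertices vertices of the live
    capture deviate from the static reference by more than threshold."""
    deviation = np.linalg.norm(live_model - static_model, axis=1)
    return int(np.count_nonzero(deviation > threshold)) > min_vertices

# Called several times per second on the latest live capture:
# if detect_change(rig_reference, live_capture):
#     trigger_alarm()  # hypothetical alarm hook
```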
- Exemplary embodiments have been described in connection with the 3D representation of an object or a part of the object.
- the approach according to the invention can also be used for a two-dimensional representation of the object or part of the object, for example by processing the data generated according to the invention, which reproduce the 3D model, on the receiver side only as 2D images or as a 2D image sequence and displaying them accordingly.
- embodiments of the invention can be implemented in hardware or in software.
- the implementation can be carried out using a digital storage medium, for example a floppy disk, a DVD, a Blu-ray disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disk or another magnetic or optical memory, on which electronically readable control signals are stored that can interact or cooperate with a programmable computer system in such a way that the respective method is carried out. The digital storage medium can therefore be computer-readable.
- Some embodiments according to the invention thus include a data carrier that has electronically readable control signals capable of interacting with a programmable computer system in such a way that one of the methods described herein is carried out.
- embodiments of the present invention can be implemented as a computer program product with a program code, the program code being effective to carry out one of the methods when the computer program product runs on a computer.
- the program code can, for example, also be stored on a machine-readable carrier.
- exemplary embodiments include the computer program for performing one of the methods described herein, the computer program being stored on a machine-readable carrier.
- an embodiment of the method according to the invention is a computer program that has a program code for performing one of the methods described herein when the computer program runs on a computer.
- a further embodiment of the method according to the invention is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for performing one of the methods described herein is recorded.
- a further exemplary embodiment of the method according to the invention is thus a data stream or a sequence of signals which represents the computer program for performing one of the methods described herein.
- the data stream or the sequence of signals can, for example, be configured to be transferred via a data communication connection, for example via the Internet.
- Another exemplary embodiment comprises a processing device, for example a computer or a programmable logic component, which is configured or adapted to carry out one of the methods described herein.
- Another exemplary embodiment comprises a computer on which the computer program for performing one of the methods described herein is installed.
- a programmable logic component, for example a field-programmable gate array (FPGA), can be used to carry out one of the methods described herein.
- in some embodiments, the field-programmable gate array can interact with a microprocessor in order to carry out one of the methods described herein.
- the methods are performed by any hardware device. This can be hardware that can be used universally, such as a computer processor (CPU), or hardware specific to the method, such as an ASIC, for example.
- a second exemplary embodiment is a method for generating data for a two- or three-dimensional representation of at least part of an object (100), with:
- a third embodiment is the method according to the second embodiment, in which the repetition rate is selected such that up to 30 frames are generated in one second.
- a 6th exemplary embodiment is the method according to one of the 1st to 5th exemplary embodiments, in which the 3D image (100a, 100b) comprises the part of the object (100) and a background (306), and in which the method comprises the following: extracting (S104) the background (306) from the data using the Z value of each vertex.
- a 7th exemplary embodiment is the method according to the 6th exemplary embodiment, in which extracting the background (306) comprises removing a vertex from the data if the Z value of the vertex is outside a predefined range.
- An 8th embodiment is the method according to the 6th embodiment or 7th embodiment, with: Correcting (S106) an edge region of the part of the object (100) by filtering out depth distances which exceed a predetermined threshold value.
- a 9th exemplary embodiment is the method according to the 1st exemplary embodiment, in which the different positions are selected in such a way that a region of the object (100) that is not visible in the first 3D image of the object (100) is visible in the second 3D image of the object (100).
- An 11th exemplary embodiment is the method according to the 10th exemplary embodiment, in which the identification (S110b) and combination (S110c) are repeated (S112) for a plurality of planes, the number and the spacing of the plurality of planes being selected such that the part of the object (100) is represented by the common 3D model (208).
- a 12th exemplary embodiment is the method according to one of the 1st to 11th exemplary embodiments, in which the 3D model (208) is generated using a plurality of 3D recordings (100a, 100b) of the object (100) from different, at least partially overlapping positions, so that the 3D model (208) comprises several sections of the object (100) or even the entire object (100).
- a 14th exemplary embodiment is the method according to the 13th exemplary embodiment, in which the coding is carried out in the manner of a spindle from a lower point on the object (100) to an upper point on the object (100).
- a 15th exemplary embodiment is the method according to one of the 1st to 14th exemplary embodiments, in which the generation (S100) of the 3D model (208) is the provision of the 3D image (100a, 100b) of the object (100) by means of a 3D -Camera (102, 104) or the provision of a stereoscopic recording of the object (100).
- A 16th exemplary embodiment is the method according to one of the 1st to 15th exemplary embodiments, in which the object (100) comprises a person, an object or a background.
- An 18th exemplary embodiment is the method according to the 17th exemplary embodiment, in which the generation of the 3D image (100a, 100b) of the part of the object (100) comprises the following: generating (S136) a 3D recording (100a, 100b) by a 3D engine in order to display or project the object (100) stereoscopically.
- A 19th exemplary embodiment is the method according to the 18th exemplary embodiment, in which the 3D engine generates up to 30 3D images (100a, 100b) per second.
- A 20th exemplary embodiment is the method according to one of the 17th to 19th exemplary embodiments, with: displaying (S138) the 3D recording (100a, 100b) by a display device (108), for example an autostereoscopic 3D monitor or a battery of powerful projectors.
- A 21st exemplary embodiment is the method according to the 20th exemplary embodiment, in which the display device (108) operates using the stereoscopic 3D Pepper's ghost method for generating holograms, the method comprising the following: projecting (S140) the 3D recordings (100a, 100b) generated by the 3D engine onto a glass pane, which comprises a lenticular lens or a suitable 3D structure, so that a 3D hologram arises for the human eye within an area in front of the glass pane.
- A 22nd exemplary embodiment is the method according to one of the 17th to 21st exemplary embodiments, in which the transmission of the data involves a transmission over the Internet or an intranet, for example through a client-server relationship, for example using TCP/IP, UDP or a server-side protocol, and/or local storage as a file (see the transmission sketch after this list).
- A 25th exemplary embodiment is a computer program comprising instructions for carrying out the method according to one of the 1st to 24th exemplary embodiments when the instructions are executed by a computer.
- A 28th exemplary embodiment is the device (202) according to the 27th exemplary embodiment, in which the object (100) is at least the face of a person, and in which the 3D engine (206) is configured to determine, for each frame, a position of the person's face in space when generating the 3D image (100a, 100b); to overlay the 3D model (208) of the person's face with a static face model of an original face of another person; to adapt the 3D model (208) of the person's face to the static face model of the other person at those locations where there is no movement; to create a texture from the 3D image (100a, 100b) of the person's face that is transparent at those points where there is movement, in order to generate a shadow mask texture; and to texture the shadow mask texture semi-transparently onto the adapted 3D model of the person's face, so that a resulting 3D sequence shows, to the human eye, a moving and animated representation of the original face (see the shadow-mask sketch after this list).
- A 29th exemplary embodiment is the device (202) according to one of the 26th to 28th exemplary embodiments, in which the 3D engine (206) is configured to extract a background (306) in the 3D image (100a, 100b) from the data using the z-value of each vertex.
- A 30th exemplary embodiment is the device (202) according to the 29th exemplary embodiment, in which the 3D engine (206) is configured to correct an edge region of the part of the object (100) by filtering out depth distances that exceed a predetermined threshold value.
- A 31st exemplary embodiment is the device (202) according to one of the 26th to 30th exemplary embodiments, in which the 3D engine (206) is configured, in order to reduce the amount of data without data loss, to determine the spatial distances of the vertices, and to encode the spatial distances, starting from a given starting point through to an end point, as a sequence of differences in a spindle-like manner (see the delta-encoding sketch after this list).
- A 32nd exemplary embodiment is the device (202) according to one of the 26th to 31st exemplary embodiments, with: a 3D camera (102, 104) or a stereoscopic recording device (108), connected to the input, for generating the 3D image (100a, 100b) of the object (100).
- A 34th exemplary embodiment is the system (200) according to the 33rd exemplary embodiment, with a display device (108), e.g. an autostereoscopic 3D monitor or a battery of powerful projectors, which is connected to the 3D engine (210).
- A 35th exemplary embodiment is the system (200) according to the 34th exemplary embodiment, in which the display device (108) operates using the stereoscopic 3D Pepper's ghost method for generating holograms and is configured to project the 3D recordings (100a, 100b) generated by the 3D engine (210) from the data onto a glass pane, which comprises a lenticular lens or a suitable 3D structure, so that a 3D hologram arises for the human eye within an area in front of the glass pane.
- A 36th exemplary embodiment is the system (200) according to one of the 33rd to 35th exemplary embodiments, in which the 3D engine (210) is configured to receive a selection of a perspective from which the object (100) is to be viewed, and to represent the object (100) from the selected perspective based on the received data describing the 3D model (208') of the object (100), so that no return channel is required to the point at which the 3D recording (100a, 100b) of the object (100) is generated (see the perspective-selection sketch after this list).
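The repetition rate of the 2nd/3rd and 19th exemplary embodiments caps generation at up to 30 frames per second. Below is a minimal frame-rate sketch of such a capped generation loop; the `generate_frame()` callback is a hypothetical stand-in for one 3D-engine generation step, not the patent's engine.

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds available per frame at 30 fps

def generate_frame(index):
    """Hypothetical placeholder for one 3D-engine generation step."""
    return f"frame {index}"

def run_capped(duration_s=1.0):
    """Generate frames for duration_s seconds, never exceeding TARGET_FPS."""
    frames = []
    start = time.monotonic()
    next_deadline = start
    while time.monotonic() - start < duration_s:
        frames.append(generate_frame(len(frames)))
        next_deadline += FRAME_BUDGET
        # Sleep off any budget left in this frame slot. "Up to" 30 fps:
        # slower generation simply yields fewer frames, never more.
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
    return frames

if __name__ == "__main__":
    print(len(run_capped(1.0)), "frames generated in one second")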
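For the 6th, 7th and 29th exemplary embodiments, background extraction keeps only those vertices whose z-value lies inside a predefined range. A minimal background-extraction sketch follows; the flat (N, 3) vertex layout and the concrete range bounds are assumptions for illustration, not the patent's data format.

```python
import numpy as np

def extract_background(vertices, z_min=0.5, z_max=2.5):
    """Extract the background (306) from the data: a vertex is removed
    if its z-value lies outside the predefined [z_min, z_max] range.

    vertices: (N, 3) array of x/y/z coordinates, one row per vertex.
    Returns the remaining foreground vertices, i.e. the part of the object.
    """
    z = vertices[:, 2]
    keep = (z >= z_min) & (z <= z_max)
    return vertices[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 4.0, size=(1000, 3))  # toy 3D-image vertex data
    fg = extract_background(cloud)
    print(f"{len(cloud)} vertices -> {len(fg)} after background removal")
```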
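The 8th and 30th exemplary embodiments correct the edge region by filtering out depth distances that exceed a predetermined threshold. One plausible reading is sketched below as an edge-correction pass on a depth map: samples whose depth differs from a direct neighbour by more than the threshold are discarded. The 4-neighbourhood and the threshold value are assumptions, not the patent's filter.

```python
import numpy as np

def correct_edges(depth, threshold=0.2):
    """Invalidate depth samples in the edge region where the depth distance
    to a horizontal or vertical neighbour exceeds `threshold` (such jumps
    are typical capture artefacts at object silhouettes).
    Returns a copy with the filtered samples set to NaN."""
    out = depth.astype(float).copy()
    dz_x = np.abs(np.diff(depth, axis=1))  # jumps between column neighbours
    dz_y = np.abs(np.diff(depth, axis=0))  # jumps between row neighbours
    bad = np.zeros_like(depth, dtype=bool)
    bad[:, :-1] |= dz_x > threshold   # mark both sides of each x-jump
    bad[:, 1:] |= dz_x > threshold
    bad[:-1, :] |= dz_y > threshold   # mark both sides of each y-jump
    bad[1:, :] |= dz_y > threshold
    out[bad] = np.nan
    return out

if __name__ == "__main__":
    depth = np.full((4, 6), 1.0)
    depth[:, 3:] = 3.0  # artificial silhouette with a large depth jump
    print(correct_edges(depth, threshold=0.5))
```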
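The 13th/14th and 31st exemplary embodiments reduce the amount of data without loss by traversing the vertices spindle-like from a lower to an upper point and coding only the difference to the previous vertex. The delta-encoding sketch below approximates the spindle path by a simple bottom-up sort on z; that ordering, and the float representation, are assumptions, not the patent's coder.

```python
import numpy as np

def spindle_encode(vertices):
    """Losslessly encode vertices as a starting point plus a sequence of
    differences, traversed from the lowest to the highest z-value (a crude
    stand-in for the spindle-like path over the object)."""
    order = np.argsort(vertices[:, 2])   # bottom-up traversal order
    path = vertices[order]
    start = path[0]
    deltas = np.diff(path, axis=0)       # consecutive differences only
    return start, deltas, order

def spindle_decode(start, deltas):
    """Invert the coding: a cumulative sum restores the traversal path."""
    return np.vstack([start, start + np.cumsum(deltas, axis=0)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    verts = rng.normal(size=(500, 3))
    start, deltas, order = spindle_encode(verts)
    restored = spindle_decode(start, deltas)
    assert np.allclose(restored, verts[order])  # lossless round trip
    print("round trip ok:", restored.shape)
```

In a real coder the small differences would then be quantised or entropy-coded; the point of the difference sequence is that consecutive values are far more compressible than absolute coordinates.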
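The 22nd exemplary embodiment transmits the data, for example through a TCP/IP client-server relationship. The transmission sketch below uses the Python standard library's socket API; the 4-byte big-endian length prefix is an assumed framing for illustration, not a protocol named by the patent.

```python
import socket
import struct
import threading

HOST, PORT = "127.0.0.1", 50007

def send_model(payload: bytes):
    """Client side: send the serialized 3D-model data with a length prefix."""
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(struct.pack(">I", len(payload)) + payload)

def serve_once():
    """Server side: receive exactly one length-prefixed message."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            (length,) = struct.unpack(">I", conn.recv(4))
            data = b""
            while len(data) < length:
                data += conn.recv(length - len(data))
            print("received", len(data), "bytes of model data")

if __name__ == "__main__":
    t = threading.Thread(target=serve_once)
    t.start()
    send_model(b"\x00" * 1024)  # stand-in for encoded vertex data
    t.join()
```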
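The 28th exemplary embodiment overlays the captured face with a static model of another person's face and textures a shadow mask that is transparent wherever there is movement. The shadow-mask sketch below reduces this to per-pixel RGBA compositing with numpy; the motion measure (frame differencing) and the blending weights are assumptions standing in for the 3D engine's texturing, not the patent's method.

```python
import numpy as np

def shadow_mask_texture(curr_face, prev_face, motion_thresh=0.05):
    """Build an RGBA texture from the current face image that is
    transparent (alpha = 0) where there is movement, opaque elsewhere."""
    motion = np.abs(curr_face - prev_face).mean(axis=2)  # per-pixel motion
    alpha = np.where(motion > motion_thresh, 0.0, 1.0)   # transparent if moving
    return np.dstack([curr_face, alpha])

def composite(static_face, mask_rgba, opacity=0.5):
    """Texture the shadow mask semi-transparently onto the adapted static
    face model: where the mask is transparent (movement), the static face
    shows through, so its features appear animated by the captured motion."""
    rgb, alpha = mask_rgba[..., :3], mask_rgba[..., 3:4]
    weight = alpha * opacity
    return static_face * (1.0 - weight) + rgb * weight

if __name__ == "__main__":
    h, w = 64, 64
    rng = np.random.default_rng(2)
    prev = rng.random((h, w, 3))
    curr = prev.copy()
    curr[20:40, 20:40] += 0.3        # simulated mouth movement
    static = rng.random((h, w, 3))   # stand-in for the other person's face
    out = composite(static, shadow_mask_texture(curr, prev))
    print(out.shape)
```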
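The 36th exemplary embodiment lets the receiving 3D engine choose the viewing perspective entirely locally, so no return channel to the capture side is needed. The perspective-selection sketch below shows that idea as a yaw rotation of the received model's vertices before rendering; the single-axis camera model is a simplification assumed for illustration.

```python
import numpy as np

def view_from(vertices, yaw_deg):
    """Rotate the received 3D model (208') around the vertical axis so it is
    seen from the selected perspective. Purely local: nothing is sent back
    to the point where the 3D recording was generated."""
    a = np.radians(yaw_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return vertices @ rot.T

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    model = rng.normal(size=(100, 3))  # stand-in for decoded vertex data
    for yaw in (0, 90, 180):           # user-selected perspectives
        v = view_from(model, yaw)
        print(f"yaw {yaw:3d}: first vertex -> {np.round(v[0], 3)}")
```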
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Architecture (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102015210453.2A DE102015210453B3 (de) | 2015-06-08 | 2015-06-08 | Verfahren und vorrichtung zum erzeugen von daten für eine zwei- oder dreidimensionale darstellung zumindest eines teils eines objekts und zum erzeugen der zwei- oder dreidimensionalen darstellung zumindest des teils des objekts |
EP16729218.4A EP3304496B1 (fr) | 2015-06-08 | 2016-06-02 | Procédé et dispositif pour générer des données pour une représentation à deux ou trois dimensions d'au moins une partie d'un objet et pour générer la représentation à deux ou trois dimensions de la partie de l'objet |
PCT/EP2016/062551 WO2016198318A1 (fr) | 2015-06-08 | 2016-06-02 | Procédé et dispositif pour générer des données pour une représentation à deux ou trois dimensions d'au moins une partie d'un objet et pour générer la représentation à deux ou trois dimensions de la partie de l'objet |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16729218.4A Division EP3304496B1 (fr) | 2015-06-08 | 2016-06-02 | Procédé et dispositif pour générer des données pour une représentation à deux ou trois dimensions d'au moins une partie d'un objet et pour générer la représentation à deux ou trois dimensions de la partie de l'objet |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3789962A1 true EP3789962A1 (fr) | 2021-03-10 |
EP3789962B1 EP3789962B1 (fr) | 2024-02-21 |
EP3789962B8 EP3789962B8 (fr) | 2024-05-29 |
Family
ID=56131506
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16729218.4A Active EP3304496B1 (fr) | 2015-06-08 | 2016-06-02 | Procédé et dispositif pour générer des données pour une représentation à deux ou trois dimensions d'au moins une partie d'un objet et pour générer la représentation à deux ou trois dimensions de la partie de l'objet |
EP20204118.2A Active EP3789962B8 (fr) | 2015-06-08 | 2016-06-02 | Procédé et dispositif de génération des données pour une représentation bi ou tridimensionnelle d'au moins une partie d'un objet et de génération de la représentation bi ou tridimensionnelle de l'au moins une partie d'un objet |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16729218.4A Active EP3304496B1 (fr) | 2015-06-08 | 2016-06-02 | Procédé et dispositif pour générer des données pour une représentation à deux ou trois dimensions d'au moins une partie d'un objet et pour générer la représentation à deux ou trois dimensions de la partie de l'objet |
Country Status (4)
Country | Link |
---|---|
US (1) | US10546181B2 (fr) |
EP (2) | EP3304496B1 (fr) |
DE (1) | DE102015210453B3 (fr) |
WO (1) | WO2016198318A1 (fr) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101812001B1 (ko) * | 2016-08-10 | 2017-12-27 | 주식회사 고영테크놀러지 | 3차원 데이터 정합 장치 및 방법 |
US10210320B2 (en) * | 2016-09-21 | 2019-02-19 | Lextron Systems, Inc. | System and method for secure 5-D user identification |
KR20180065135A (ko) * | 2016-12-07 | 2018-06-18 | 삼성전자주식회사 | 셀프 구조 분석을 이용한 구조 잡음 감소 방법 및 장치 |
CN109693387A (zh) | 2017-10-24 | 2019-04-30 | 三纬国际立体列印科技股份有限公司 | 基于点云数据的3d建模方法 |
KR102082894B1 (ko) * | 2018-07-09 | 2020-02-28 | 에스케이텔레콤 주식회사 | 오브젝트 표시 장치, 방법 및 이러한 방법을 수행하는 컴퓨터 판독 가능 매체에 저장된 프로그램 |
WO2021067888A1 (fr) | 2019-10-03 | 2021-04-08 | Cornell University | Optimisation du dimensionnement de soutien-gorge en fonction de la forme 3d des seins |
US11475652B2 (en) | 2020-06-30 | 2022-10-18 | Samsung Electronics Co., Ltd. | Automatic representation toggling based on depth camera field of view |
US12026901B2 (en) | 2020-07-01 | 2024-07-02 | Samsung Electronics Co., Ltd. | Efficient encoding of depth data across devices |
JP6804125B1 (ja) * | 2020-07-27 | 2020-12-23 | 株式会社Vrc | 3dデータシステム及び3dデータ生成方法 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
MY124160A (en) * | 1997-12-05 | 2006-06-30 | Dynamic Digital Depth Res Pty | Improved image conversion and encoding techniques |
US7221809B2 (en) * | 2001-12-17 | 2007-05-22 | Genex Technologies, Inc. | Face recognition system and method |
KR100914847B1 (ko) * | 2007-12-15 | 2009-09-02 | 한국전자통신연구원 | 다시점 영상정보를 이용한 삼차원 얼굴 모델 생성방법 및장치 |
KR101758163B1 (ko) * | 2010-12-31 | 2017-07-14 | 엘지전자 주식회사 | 이동 단말기 및 그의 홀로그램 제어방법 |
GB201101810D0 (en) * | 2011-02-03 | 2011-03-16 | Rolls Royce Plc | A method of connecting meshes |
US8587641B2 (en) * | 2011-03-04 | 2013-11-19 | Alexander Roth | Autostereoscopic display system |
US8692738B2 (en) * | 2011-06-10 | 2014-04-08 | Disney Enterprises, Inc. | Advanced Pepper's ghost projection system with a multiview and multiplanar display |
US9196089B2 (en) * | 2012-05-17 | 2015-11-24 | Disney Enterprises, Inc. | Techniques for processing reconstructed three-dimensional image data |
US9311550B2 (en) * | 2013-03-06 | 2016-04-12 | Samsung Electronics Co., Ltd. | Device and method for image processing |
US9378576B2 (en) * | 2013-06-07 | 2016-06-28 | Faceshift Ag | Online modeling for real-time facial animation |
KR102143871B1 (ko) * | 2014-04-22 | 2020-08-12 | 삼성전자 주식회사 | 전자장치의 전원 제어장치 및 방법 |
- 2015
  - 2015-06-08 DE DE102015210453.2A patent/DE102015210453B3/de active Active
- 2016
  - 2016-06-02 EP EP16729218.4A patent/EP3304496B1/fr active Active
  - 2016-06-02 EP EP20204118.2A patent/EP3789962B8/fr active Active
  - 2016-06-02 WO PCT/EP2016/062551 patent/WO2016198318A1/fr active Application Filing
- 2017
  - 2017-12-08 US US15/835,667 patent/US10546181B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050083248A1 (en) * | 2000-12-22 | 2005-04-21 | Frank Biocca | Mobile face capture and image processing system and method |
Non-Patent Citations (2)
Title |
---|
"Principles of Digital Image Processing: Core Algorithms", 31 December 2009, SPRINGER LONDON, London, ISBN: 978-1-84-800195-4, ISSN: 1863-7310, article WILHELM BURGER ET AL: "Principles of Digital Image Processing: Core Algorithms Chapter 2", pages: 5 - 48, XP055149950, DOI: 10.1007/978-1-84800-195-4_2 * |
WAGENKNECHT ET AL: "A contour tracing and coding algorithm for generating 2D contour codes from 3D classified objects", PATTERN RECOGNITION, ELSEVIER, GB, vol. 40, no. 4, 14 December 2006 (2006-12-14), pages 1294 - 1306, XP005730700, ISSN: 0031-3203, DOI: 10.1016/J.PATCOG.2006.09.003 * |
Also Published As
Publication number | Publication date |
---|---|
EP3304496A1 (fr) | 2018-04-11 |
EP3789962B8 (fr) | 2024-05-29 |
EP3789962B1 (fr) | 2024-02-21 |
US10546181B2 (en) | 2020-01-28 |
DE102015210453B3 (de) | 2016-10-13 |
WO2016198318A1 (fr) | 2016-12-15 |
EP3304496B1 (fr) | 2020-11-04 |
US20180101719A1 (en) | 2018-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3304496B1 (fr) | Procédé et dispositif pour générer des données pour une représentation à deux ou trois dimensions d'au moins une partie d'un objet et pour générer la représentation à deux ou trois dimensions de la partie de l'objet | |
EP3347876B1 (fr) | Dispositif et procédé pour générer un modèle d'un objet au moyen de données-image de superposition dans un environnement virtuel | |
DE112018000311T5 (de) | Stereoskopisches Rendering unter Verwendung von Raymarching und ein Broadcaster für eine virtuelle Ansicht für solches Rendering | |
DE69932619T2 (de) | Verfahren und system zum aufnehmen und repräsentieren von dreidimensionaler geometrie, farbe und schatten von animierten objekten | |
EP3593528A2 (fr) | Dispositif et procédé pour représenter un modèle d'un objet dans un environnement virtuel | |
DE102007045834B4 (de) | Verfahren und Vorrichtung zum Darstellen eines virtuellen Objekts in einer realen Umgebung | |
DE602005006347T2 (de) | System und Verfahren zur Erzeugung einer zweilagigen 3D-Darstellung einer Szene | |
DE69621509T2 (de) | Verfahren zur Auswahl zweier Einzelbilder einer zweidimensionalen Bildsequenz als Basis für die Berechnung der relativen Tiefe von Bildobjekten | |
DE69226512T2 (de) | Verfahren zur Bildverarbeitung | |
EP2206090B1 (fr) | Procédé et dispositif de représentation d'un objet virtuel dans un environnement réel | |
EP3573021B1 (fr) | Visualisation de données d'image tridimensionnelle | |
DE602004012341T2 (de) | Verfahren und System zur Bereitstellung einer Volumendarstellung eines dreidimensionalen Objektes | |
WO2011103865A2 (fr) | Procédé et affichage autostéréoscopique pour la production d'images tridimensionnelles | |
WO2014118145A1 (fr) | Procédé et dispositif de traitement de données d'images tridimensionnelles | |
WO2008025842A1 (fr) | Interface et circuit utilisés en particulier pour des unités de codage holographiques ou des ensembles de reproduction holographiques | |
WO2009118156A2 (fr) | Procédé de production d'une image 3d d'une scène à partir d'une image 2d de la scène | |
DE102015017128A1 (de) | Verfahren und Vorrichtung zum Erzeugen von Daten für eine zwei- oder dreidimensionale Darstellung zumindest eines Teils eines Objekts und zum Erzeugen der zwei- oder dreidimensionalen Darstellung zumindest des Teils des Objekts | |
EP2478705A1 (fr) | Procédé et dispositif de production d'images partielles et/ou d'un modèle d'image stéréoscopique à partir d'une vue 2d pour une reproduction stéréoscopique | |
WO2018133996A1 (fr) | Procédé permettant de combiner une pluralité d'images caméra | |
DE112021003549T5 (de) | Informationsverarbeitungsvorrichtung, informationsverarbeitungsverfahren und programm | |
WO2007085482A1 (fr) | Procédé pour générer et représenter des images perceptibles dans l’espace | |
DE112020001322T5 (de) | Eine szene darstellendes bildsignal | |
DE112020006061T5 (de) | Informationsverarbeitungsvorrichtung und -verfahren, programm und informationsverarbeitungssystem | |
EP0960388B1 (fr) | Procede et dispositif de codification d'une image numerisee | |
EP1600008A1 (fr) | Procede de transmission de donnees d'images sous forme comprimee pour une representation tridimensionnelle de scenes et d'objets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
| AC | Divisional application: reference to earlier application | Ref document number: 3304496; Country of ref document: EP; Kind code of ref document: P |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20210909 |
| RBV | Designated contracting states (corrected) | Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: GRANT OF PATENT IS INTENDED |
| INTG | Intention to grant announced | Effective date: 20230904 |
| GRAS | Grant fee paid | Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
| GRAA | (expected) grant | Free format text: ORIGINAL CODE: 0009210 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
| AC | Divisional application: reference to earlier application | Ref document number: 3304496; Country of ref document: EP; Kind code of ref document: P |
| AK | Designated contracting states | Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| REG | Reference to a national code | Ref country code: GB; Ref legal event code: FG4D; Free format text: NOT ENGLISH |
| REG | Reference to a national code | Ref country code: CH; Ref legal event code: EP |
| REG | Reference to a national code | Ref country code: IE; Ref legal event code: FG4D; Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
| REG | Reference to a national code | Ref country code: DE; Ref legal event code: R096; Ref document number: 502016016387; Country of ref document: DE |
| REG | Reference to a national code | Ref country code: CH; Ref legal event code: PK; Free format text: TITEL; and Ref country code: CH; Ref legal event code: PK; Free format text: BERICHTIGUNG B8 |
| REG | Reference to a national code | Ref country code: LT; Ref legal event code: MG9D |
| REG | Reference to a national code | Ref country code: NL; Ref legal event code: MP; Effective date: 20240221 |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: IS (effective 20240621) |
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] | Ref country code: GB; Payment date: 20240627; Year of fee payment: 9 |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: LT (effective 20240221) |
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] | Ref country code: DE; Payment date: 20240628; Year of fee payment: 9 |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: GR (effective 20240522) |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: RS (effective 20240521); NL (effective 20240221); HR (effective 20240221) |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: ES (effective 20240221) |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: RS (effective 20240521); NO (effective 20240521); NL (effective 20240221); LT (effective 20240221); IS (effective 20240621); HR (effective 20240221); GR (effective 20240522); FI (effective 20240221); ES (effective 20240221); BG (effective 20240221) |
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] | Ref country code: FR; Payment date: 20240627; Year of fee payment: 9 |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: PT (effective 20240621); PL (effective 20240221) |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: SE (effective 20240221); PT (effective 20240621); PL (effective 20240221); LV (effective 20240221) |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: DK (effective 20240221) |
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: SM (effective 20240221) |