EP1794718A2 - Visibility data compression method, compression system and decoder - Google Patents
- Publication number
- EP1794718A2 (application EP05797404A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- visibility
- data
- matrix
- lines
- observation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
Definitions
- the invention relates to a method for compressing and a method for decompressing visibility data of a visibility database, as well as to systems for implementing these methods.
- a field of application of the invention is the visualization of the objects of a scene from an observation point moving in this scene, for example in a simulation or a game, in which the observation point is virtually that of the user moving in the scene.
- the objects are predefined and the observation point can be moved in their environment, which must change the appearance of the objects represented on the screen, for example because they are viewed from a different angle, because they are occluded, or because new objects previously hidden appear.
- a known lossy compression algorithm merges rows and columns having a similar set of entries set to True.
- Another, lossless, compression algorithm adds new rows and columns to the visibility table, which are obtained from rows with common entries. As a result, either information is lost, with the first algorithm, or the visibility table remains too large.
- these algorithms are not suited to networked display, that is to say display on a user station remote from a base storing the visibility data.
- the following documents may also be cited: - C. Gotsman, O. Sudarsky, J. Fayman, "Optimized Occlusion
- the object of the invention is to obtain a method for compressing visibility data which overcomes the disadvantages of the state of the art, which is largely lossless, and which makes it possible to reduce the size of the compressed data significantly.
- a first object of the invention is a method of compressing visibility data of a visibility database, the visibility data comprising cells of observation points in a predetermined object observation space and, for each cell of observation points, a list of objects potentially visible from this observation point cell among said predetermined objects, the visibility data being represented in the database at least in the form of a first visibility matrix of Boolean elements, each line of which corresponds to a cell and each column of which corresponds to one of said predetermined objects, the Boolean element located on the ith line and the jth column having the logic value 1 when the object j belongs to the list of potentially visible objects associated with the cell i, and having the logic value 0 otherwise, characterized in that it is automatically detected in the database whether lines of the visibility matrix have a high number of common elements while not being neighbors and, in case of detection, lines are exchanged so as to make them neighbors.
- a digital image coding is automatically applied to the Boolean elements of the modified visibility matrix to obtain a visibility data code, these Boolean elements of the modified visibility matrix forming the pixels of the digital image for coding.
- the visibility database being associated with a data processing system, the system transmits the generated visibility data code to a visibility data operating unit;
- the data processing system is a data server and the visibility data operating unit is a visibility data exploitation station;
- the data processing system and the visibility data operating unit are provided on the same visibility data exploitation station;
- the following steps are automatically performed in the data processing system: determination, among the regions of the observation space, of an observation region to which the observation point belongs, corresponding to the position information received from the operating unit; determination, among the cells of the visibility data, of first cells having an intersection with the observation region; extraction, from the first visibility matrix, of a second visibility matrix, whose lines are formed by the lines of the first visibility matrix which correspond to said first determined cells of the observation region, the modified visibility matrix being formed from the second visibility matrix;
- a list of the objects potentially visible from the observation region is automatically determined in the data processing system from the second visibility matrix and is sent by the data processing system, with the visibility data code, to the operating unit; a list of the identifiers of the cells of the modified visibility matrix is sent by the data processing system, with the visibility data code, to the operating unit;
- the identical lines of the visibility matrix are automatically determined in the data processing system, so as to keep from this visibility matrix, in a fourth visibility matrix, only one line per group of identical lines, the modified visibility matrix being formed from the fourth visibility matrix; a datum of the number of groups of identical lines of the visibility matrix is automatically calculated in the data processing system, together with, for each group of identical lines, a group datum comprising the number of identical lines of the group and a list of the identifiers of the identical lines of said group, said datum of the number of groups of identical lines and said group data being sent by the data processing system, with the visibility data code, to the operating unit;
- the number of identical lines of said group is coded on a number of bits fixed in advance and each identifier of the identical lines of said list of said group is coded on a number of bits fixed in advance, each group datum being constituted, in an order prescribed for all the groups, of said number of identical lines of the group and of said list of identifiers of the identical lines of the group;
- each detected column containing only 0s is automatically removed from the visibility matrix to form a third visibility matrix, the modified visibility matrix being formed from the third visibility matrix;
- a second object of the invention is a method for decompressing the visibility data code generated by the visibility data compression method as described above, characterized in that a digital image decoding, inverse of the digital image coding applied by said compression method, is applied to the generated visibility data code, this decoding producing a visibility matrix.
- columns of 0s are automatically added to the visibility matrix obtained by said digital image decoding, at the positions of the non-potentially-visible objects indicated by the list of objects potentially visible from the observation region;
- the invention also relates to a decoder, comprising means for the implementation of the visibility data decompression method as described above.
- Another object of the invention is a system associated with a visibility database, the visibility data comprising observation point cells in a predetermined object observation space and, for each observation point cell, a list of objects potentially visible from this observation point cell among said predetermined objects, the visibility data being represented in the database at least in the form of a first visibility matrix of Boolean elements, each line of which corresponds to a cell and each column of which corresponds to one of said predetermined objects, the Boolean element located on the ith line and the jth column having the logic value 1 when the object j belongs to the list of potentially visible objects associated with the cell i, and having the logic value 0 otherwise, the system comprising means for implementing the method of compressing the visibility data as described above.
- Another object of the invention is a computer program comprising instructions for implementing the data compression method as described above, when executed on a computer.
- Another object of the invention is a computer program comprising instructions for implementing the data decompression method as described above, when executed on a computer.
- FIG. 1 diagrammatically represents a client-server device in which the data compression and decompression methods according to the invention are implemented;
- FIG. 2 represents the modeling of a three-dimensional object in two and a half dimensions;
- FIG. 3 is a top view of a scene comprising objects and view cells of these objects;
- FIG. 4 is a top view of the shadow volumes generated by occluding facades, which can be used to calculate a list of potentially visible objects behind these facades;
- FIG. 5 represents in perspective a visibility horizon generated by the calculation method illustrated in FIG. 4;
- FIG. 6 represents a flowchart of one embodiment of the visibility data compression method according to the invention;
- FIG. 7 is a top view of the scene of FIG. 3 comprising objects and divided into regions;
- FIG. 8 represents the visibility data structure compressed by the compression method according to FIG. 6;
- FIG. 9 represents a flowchart of one embodiment of the visibility data decompression method according to the invention;
- FIG. 10 represents a two-dimensional view cell of a scene;
- FIG. 11 represents a three-dimensional view cell of a scene;
- FIG. 12 shows the five dimensions of a 5D view cell of a scene.
- the invention is described below in one embodiment of devices for implementing the visualization of objects, adopting for example the client-server architecture, in which a server SERV hosts, in a database BD, the visibility data DV of the objects.
- the objects correspond to a set of geometric data DVOBJ modeling a real or abstract object.
- since the server SERV has data processing means in its calculator CAL, it also forms a data processing system SERV.
- a user station CL, remote from the server SERV, is connected to it by a telecommunication channel CT to send it requests; this station CL is then a data exploitation unit.
- the server SERV returns responses to these requests to the station CL on the channel CT.
- the channel CT uses, for example, one or more telecommunication networks.
- the base BD and the data processing system SERV could also be present on the same machine as the CL station, for example on a computer of the user.
- the base BD is, for example, present on a mass memory of the computer (for example a hard disk or other) or on a removable storage medium (for example a CD-ROM or other) introduced into a corresponding reader of the computer, and the compressed data must be produced in order to be exploited more quickly by a unit of the computer.
- the exploitation of the compressed visibility data consists in displaying an image determined from them.
- the station CL has a user interface IUT, comprising a screen ECR for reproducing an image and, for example, also speakers.
- the image displayed on the screen ECR is computed by a rendering engine MR, based on the data received on the network interface INTC and, if necessary in addition, on a local database BL present in the station CL.
- the user interface IUT further comprises commands COM, such as for example a joystick, allowing the user to move a cursor or the like on the screen ECR in a scene or environment SC, whose image is displayed on this screen ECR.
- Such an action is called navigation in the scene.
- the scene SC is in two or three dimensions, the description being made in what follows for a three-dimensional scene.
- the description made for the three-dimensional scene is applicable to a two-dimensional scene, with the corresponding adaptations.
- the image displayed on the ECR screen is the scene SC seen from an observation point P at the cursor, which is thus moved in the scene SC with each navigation.
- the navigation requires to recalculate the image, which will be modified compared to the image previously displayed on the ECR screen.
- the space in which the observation point P can be moved in the scene SC is called the observation or navigation EO space.
- the scene SC comprises a plurality p of objects OBJ, the observation space EO then being formed by the gaps between and around the objects OBJ.
- the space EO is for example equal to the complement of the union of all OBJ objects.
- the objects OBJ are, for example, buildings in a city, the observation space EO then being formed, for example, by streets located between the buildings.
- OBJ objects are represented by a 2.5D model, namely an EMPR footprint on the ground, an altitude ALT and a height H along the vertical ascending z axis.
- a three-dimensional object OBJ is deduced from the object OBJ in 2.5D by elevating a prism of height H on its footprint EMPR on the ground, starting from its altitude ALT, according to FIGS. 2 and 10.
- the observation space EO is, for example, located on the ground and may have different altitudes from one place to another.
- the visibility data DV includes the object-specific geometric data DVOBJ, i.e. in the above example the footprint data EMPR (such as, for example, their shape), the altitudes ALT and the heights H, as well as the coordinates of the objects OBJ in the scene SC, namely in the above example the coordinates of the footprints EMPR.
- the visibility DV data also includes the DVEO data specific to the observation EO space.
- the station CL sends to the server SERV, by the channel CT, a request REQV for visualization of a new image of the scene SC, comprising position information INFP of the observation point P (for example its coordinates) in the observation space EO.
- the visualization request REQV is emitted, for example, automatically with each navigation, each time the observation point P has been moved from a starting position, which is the position corresponding to the previous image displayed on the screen ECR, to another position in the scene SC, for which the image must be recalculated to be displayed on the screen ECR.
- a calculator CAL of the server SERV determines, from the visibility data DV stored in the database BD and from the position information INFP of the observation point P contained in this request REQV, the visibility data DE to be sent in a response RV to the station CL by the channel CT. These data DE will be sent in packets to the station CL during a step E9, and the station will exploit them with its rendering engine MR and its interface IUT.
- the response RV calculated by the server SERV is stored therein.
- the observation space EO is subdivided into view cells i, having pre-calculated coordinates and associated with their individual identifiers i in the database BD and in the local database BL of the station CL.
- the cells are at least two-dimensional. In FIGS. 2, 3, 4, 5 and 7, the cells are two-dimensional (2D), but the cells could also be three-dimensional (3D) as described below with reference to FIG. 11, or five-dimensional (5D) as described below with reference to FIG. 12.
- the view cells i are obtained for example by two-dimensional triangulation, as is shown in FIG. 3, the vertices of the triangles forming the view cells being then formed by the SEMPR vertices of the corners of the EMPR footprints of the OBJ objects and the points from the edge of the SC scene.
- the bold lines LIMi represent the limits of the cells i and are thus formed by the rectilinear segments forming the sides of the triangular cells.
- the three-dimensional view cells are obtained by vertical elevation of a prism on the triangles.
- a view cell is a region of an environment that contains visibility information about the rest of the environment, which is identical regardless of the point of view in the cell.
- the dimension of the view cell is deduced as follows. Observation points can be described in two dimensions (2D) or in three dimensions (3D). If the observation direction is not taken into account in the visibility calculation, no additional dimension is assigned to the view cell, which then has the dimension of the observation point, that is to say two or three.
- In FIGS. 2, 3, 4, 5 and 7, a representation is described in which the view cells have two dimensions along the x and y axes, since the view cells are formed by the triangles and the visibility data are calculated for all viewing directions from an observation point.
- the navigation height is not taken into account, because this is the case of ground-level navigation and the calculations were made under this assumption.
- a third dimension is added to the view cells of the example given previously, by calculating the visibility information for different heights of observation points along the z axis.
- the view cell contains observation points of dimension 3, since the visibility information will be different depending on the height of the points in the observation space.
- View cells are represented as a stack of prisms. The lowest view cell C1 (corresponding to the lowest prism) gives visibility for altitudes below its height h1, and the view cell C2 (corresponding to the higher prism), located above the cell C1, gives visibility for altitudes above the height h1.
- In Figure 12 is shown the five-dimensional coordinate system of a 5D view cell.
- Each view cell contains the visibility information for observation points P such that x1 ≤ X ≤ x2, y1 ≤ Y ≤ y2 and z1 ≤ Z ≤ z2, and for directions of visibility.
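As a concrete illustration of the bounds above, a membership test for such a 5D view cell might look as follows. This is a minimal Python sketch; the dictionary keys and the (theta, phi) parameterisation of the viewing direction are assumptions of the sketch, since the text only states that position and directions of visibility are bounded.

```python
def point_in_cell_5d(P, cell):
    """True if the 5D observation point P = (x, y, z, theta, phi)
    lies inside the cell's position and direction bounds, in which
    case the cell's pre-computed visibility list applies to P."""
    x, y, z, theta, phi = P
    return (cell["x1"] <= x <= cell["x2"] and
            cell["y1"] <= y <= cell["y2"] and
            cell["z1"] <= z <= cell["z2"] and
            cell["theta1"] <= theta <= cell["theta2"] and
            cell["phi1"] <= phi <= cell["phi2"])
```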
- the base BD contains as visibility data DV, in addition to the cells i and the objects OBJ, the lists LOPV(i) of objects potentially visible from each cell i, also pre-calculated.
- the map of the minimum visible heights of the objects with respect to this cell is calculated.
- This map is obtained by gridding the floor area occupied by the scene; it contains, for each rectangle r, the minimum height Hmin(r) over the whole rectangle with respect to the cell.
- This map guarantees that, if an object is entirely enclosed in a rectangle r and its height is less than Hmin(r), then the object is not visible from the cell.
- This map is represented using a two-dimensional matrix, called the visibility matrix MV.
- Hmin (r) is calculated as follows.
- the truncated pyramid defined between the four half-lines passing through the observation point P and the four vertices of the facade, as shown in FIG. 4, is called the shadow volume OMB.
- This shadow volume OMB guarantees that any object entirely contained in this three-dimensional volume OMB is not visible from the observation point P.
- the facade is then called an occluding facade.
- the matrix MV of the minimum heights Hmin (r) for the observation point P is deduced from the height of the shadow volumes in each rectangle r of the matrix.
- the shaded rectangles represent the shadow volumes OMB generated by the two facades o1 and o2.
- the rectangle r1 located outside the volumes OMB will have a minimum height Hmin(r1) of zero above the rectangle's own altitude, while the rectangle r2 located in the shadow volume OMB generated by the facade o1 will have a minimum height Hmin(r2) greater than zero above the rectangle's own altitude; Hmin(r2) passes over the pyramid of the shadow volume of o1 and can be calculated from the defined position and shape of this pyramid.
- Hmin(r) = min{ H_P(r), P belonging to PE },
- H_P(r) = max{ H_P,o(r), o belonging to FO } being the function representing the minimum height visible from the rectangle r with respect to the observation point P,
- H_P,o(r) being the function representing the minimum height visible from the rectangle r with respect to the observation point P, taking into account only the occluding facade o.
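The min-over-max structure of these formulas can be sketched directly. In this Python sketch, `visible_height`, standing in for H_P,o(r), is a hypothetical callback, since the actual per-facade height computation depends on the shadow-pyramid geometry described above.

```python
def hmin(r, cell_points, facades, visible_height):
    """Hmin(r) = min over observation points P in PE of
       H_P(r)  = max over occluding facades o in FO of H_P,o(r).
    visible_height(P, o, r) must return H_P,o(r), the minimum height
    visible in rectangle r when only facade o occludes."""
    return min(
        max(visible_height(P, o, r) for o in facades)
        for P in cell_points
    )
```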
- the MV matrix of the minimum heights Hmin (r) makes it possible to calculate an HV horizon of visibility, as is shown in FIG. 5.
- This horizon HV of visibility is defined for example around the visibility matrix MV.
- This visibility horizon HV guarantees that any object outside the area covered by the visibility matrix MV, and whose projection from the box Bi enclosing the cell i lies below the visibility horizon HV, is not visible from this cell i; this is the case in FIG. 5 for the object OBJ1, whose projection POBJ1 lies below the horizon HV with respect to the box Bi.
- the lists LOPV(i) of objects potentially visible from the cells i are represented in the database BD by a first visibility matrix Mnp with n lines and p columns of Boolean elements of logic value 1 or 0,
- each line of which corresponds to a determined cell i and each column j of which corresponds to a determined object OBJ.
- the invention provides means for automatically compressing in the server SERV this matrix Mnp of visibility of the base BD, for example by the calculator CAL.
- the lines i of the visibility matrix Mnp are ordered so as to increase the coherence, or to have as much coherence as possible, between neighboring lines; a digital image coding is then applied, during a step E8, to the visibility matrix thus modified, the 1s and 0s of the visibility matrix forming the pixels of the digital image to be coded.
- the calculator CAL of the server SERV has a corresponding encoder for this purpose. Zones full of 1s and zones full of 0s are thus formed in the visibility matrix, by moving its lines, in the manner of a black-and-white digital image whose black would be represented by 0 and white by 1.
- During step E7, it is detected whether lines i of the visibility matrix have a large number of common elements (common 0s and 1s).
- the detection of similar lines is performed for example by applying the exclusive OR operator (XOR) to any or all possible combinations of two rows of the matrix.
- two lines i1 and i2 will have a large number of common elements if (i1) XOR (i2) has a large number of 0s.
- the detection of common elements is executed for each line successively with each subsequent line of the matrix, and the lines with the most elements in common are grouped together.
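The XOR-based similarity test and the grouping of similar lines might be sketched as follows. This is an illustrative Python sketch, not the exact ordering heuristic of step E7: it greedily appends, after each placed line, the remaining line sharing the most elements with it, so that similar lines become neighbors; rows are lists of 0/1.

```python
def common_elements(row_a, row_b):
    # Two rows with many common elements have many 0s in their XOR.
    return sum(1 for a, b in zip(row_a, row_b) if (a ^ b) == 0)

def reorder_rows(matrix):
    """Greedy reordering: start from the first row, then repeatedly
    append the remaining row sharing the most elements with the last
    one placed. Returns the new row order and the reordered matrix."""
    remaining = list(range(len(matrix)))
    order = [remaining.pop(0)]
    while remaining:
        last = matrix[order[-1]]
        best = max(remaining, key=lambda i: common_elements(matrix[i], last))
        remaining.remove(best)
        order.append(best)
    return order, [matrix[i] for i in order]
```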
- the visibility matrix thus modified will then be coded according to a digital image coding.
- the coding performed could be any other coding taking into account the contents of the thing to be coded and can be adaptive.
- digital images usually have some continuity between neighboring lines, due to the fact that the same single-color area of the image is most often represented by pixels of neighboring lines having substantially the same value.
- This digital image coding is applied to the visibility matrix, which does not code in itself an image but data formed by lists of potentially visible objects LOPV (i).
- the Boolean elements of the matrix will then form the pixels that can only have level 1 or level 0, to which the digital image coding will be applied.
- This digital image coding has nothing to do with the image to be displayed on the screen ECR of the user's station CL, which image may or may not be digital; it produces a visibility data code I, or pseudo-digital image code I, so called because the coding is applied not to a digital image but to the matrix.
- This digital image coding is, for example, of the JBIG (Joint Bi-Level Image Experts Group) type, designed for the lossless coding of black-and-white images.
- the JBIG coding takes into account the content of the image to be encoded and involves an adaptive arithmetic coder used to predict and encode the current data according to the previously coded data.
- the JBIG coder models the redundancy in the image to be encoded, by considering the correlation of the pixels being coded with a set of pixels called templates.
- a template could be, for example, the 2 previous pixels in the current line and the 5 pixels centered above the current pixel in the line above.
- JBIG coding can use even more precise adaptive templates, as well as an arithmetic encoder coupled to a high-performance probabilistic estimator.
- the coding performed could be of PNG type ("Portable Network Graphics"), which is intended for graphic formats and is specifically designed for the Internet network.
- the line ordering step E7 can be performed directly on the visibility matrix Mnp, or on a matrix Msr different from the visibility matrix Mnp and obtained from it, as will be seen in the embodiment of the method described below with reference to FIG. 6.
- a second step E2 is executed, during which the scene SC is broken down into regions R.
- These regions R therefore cover the observation space EO and the objects OBJ, as this is shown in Figure 7 for the same SC scene as in Figure 3.
- the R regions are generally larger in size than the cells, so that substantially each R region at least partially covers several cells.
- the regions R have, for example, all the same shape, independently of the scene SC, and are in particular, in Figure 7, rectangular blocks.
- These rectangular blocks R have, for example, as shown in Figure 7, their sides parallel to the box encompassing the scene, and form parallelepiped blocks by vertical elevation.
- the bold lines LIMR represent the limits of the rectangular blocks R and are thus formed by the rectilinear segments forming their sides.
- the coordinates of the regions R are recorded in the database BD.
- the region R to which the observation point P belongs, corresponding to the position information INFP contained in the request REQV, is determined.
- This region will be called the PR region of observation in the following.
- the coordinates (xminRP, xmaxRP, yminRP, ymaxRP) of the block RP are determined by comparison with (xP, yP) according to xminRP ≤ xP ≤ xmaxRP and yminRP ≤ yP ≤ ymaxRP.
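This comparison amounts to a simple point-in-rectangle lookup over the pre-recorded blocks R. A minimal Python sketch follows; the dictionary layout of the region coordinates is an assumption of the sketch.

```python
def find_region(xP, yP, regions):
    """Return the identifier of the block RP whose bounds contain the
    observation point (xP, yP), i.e. xminRP <= xP <= xmaxRP and
    yminRP <= yP <= ymaxRP; None if the point is outside all blocks."""
    for rid, (xmin, xmax, ymin, ymax) in regions.items():
        if xmin <= xP <= xmax and ymin <= yP <= ymax:
            return rid
    return None
```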
- It is then determined in step E3, from the coordinates of the cells i and the coordinates of the observation region RP, the cells, called iRP, having a non-zero intersection with the region RP of the point P; these are, for example, m in number, m being less than or equal to n.
- a second visibility matrix Mmp is then formed from the first visibility matrix Mnp, having the m lines of Mnp corresponding to the m cells iRP of the region RP, and the p columns of Mnp. As a result, the matrix Mmp will be much smaller in size than the matrix Mnp.
- a list Tp of the visible OBJ objects of the observation RP region is determined from the second matrix Mmp.
- an object j visible from one of the cells iRP (corresponding to a 1 in that cell's line, in the column for this object) will belong to this list Tp.
- This list Tp is obtained for example by making the logical sum by a logical OR operator of the lines of Mmp.
- This list Tp is, for example, formed of a table of objects of the region RP, of p bits, of which each bit j is equal to 1 when one or more 1s are present in the column j of Mmp (object j visible from one or more cells iRP) and each bit j is equal to 0 when the column j of Mmp has only 0s (object j not visible from the cells iRP).
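The logical OR of the lines of Mmp can be sketched as follows (a minimal Python sketch, with lines represented as lists of 0/1):

```python
def visible_objects_table(Mmp):
    """Tp[j] = 1 if at least one cell iRP of the region sees object j,
    i.e. the bitwise OR of all lines of the sub-matrix Mmp."""
    Tp = [0] * len(Mmp[0])
    for line in Mmp:
        Tp = [t | e for t, e in zip(Tp, line)]
    return Tp
```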
- the columns j formed only of 0s, which correspond to objects not visible from the observation region RP of the point P, are removed from the second matrix Mmp determined in step E3, to obtain a third visibility matrix Mmr, where r ≤ p.
- the position of the deleted columns is identified.
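Since the table Tp already marks which columns are all zero, removing them needs no extra bookkeeping; their positions remain identified by Tp itself. A minimal Python sketch:

```python
def strip_zero_columns(Mmp, Tp):
    """Form the third matrix Mmr by keeping only the columns j with
    Tp[j] == 1; the positions of the deleted all-zero columns remain
    known through Tp."""
    kept = [j for j, t in enumerate(Tp) if t == 1]
    return [[line[j] for j in kept] for line in Mmp]
```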
- During step E6 of matrix size reduction, it is determined whether the third visibility matrix Mmr has identical lines iRP. If at least two lines i3 and i4 are identical, only one of them, i3, will be kept in a fourth visibility matrix Msr obtained from the third visibility matrix Mmr, with s ≤ m.
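Step E6 can be sketched as follows in Python. The choices of keeping the first line of each group and of recording only groups of two or more identical lines are assumptions of the sketch.

```python
def deduplicate_lines(Mmr, cell_ids):
    """Keep one line per group of identical lines, producing the
    fourth matrix Msr; also return the kept cell identifiers and,
    for each group of size >= 2, the list LID of cell identifiers
    sharing that line."""
    first_seen = {}   # line (as tuple) -> index of its kept copy in Msr
    members = []      # per kept line, the cell ids sharing it
    Msr, kept_ids = [], []
    for line, cid in zip(Mmr, cell_ids):
        key = tuple(line)
        if key not in first_seen:
            first_seen[key] = len(Msr)
            Msr.append(list(line))
            kept_ids.append(cid)
            members.append([])
        members[first_seen[key]].append(cid)
    groups = [ids for ids in members if len(ids) >= 2]
    return Msr, kept_ids, groups
```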
- As shown in FIG. 8, the number NGR of groups GR of identical lines iRP is for example coded on 8 bits, the number t of identical lines of each group datum DGR is coded on 8 bits, and each list LID of the identifiers i3, i4 of the identical lines of the group GR is coded on t*k bits, where k is the number of bits fixed to encode a line or cell identifier i3 or i4.
- the group data DGR are coded one after the other.
- the transition from a first datum DGR1 of a first group GR1 to a second datum DGR2 of a second group GR2 following it is decoded from the fact that, after the number t1 of identical lines of DGR1 (8 bits long), exactly t1*k bits are expected for the list LID1 of identifiers of the identical lines of the group GR1; after these t1*k bits begins the number t2 of identical lines of DGR2 (8 bits long), followed by the t2*k bits of the list LID2 of identifiers of the identical lines of DGR2, t2 being the number of identical lines of DGR2.
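This fixed-width layout (NGR on 8 bits, then for each group its count t on 8 bits followed by t identifiers of k bits each) can be sketched as a bit-string round trip. A Python sketch, using strings of '0'/'1' characters for readability rather than a packed byte stream:

```python
def encode_groups(groups, k):
    """Serialise the group data DGR: NGR on 8 bits, then per group
    its line count t on 8 bits and its t identifiers on k bits each."""
    bits = format(len(groups), "08b")
    for ids in groups:
        bits += format(len(ids), "08b")
        for i in ids:
            bits += format(i, "0%db" % k)
    return bits

def decode_groups(bits, k):
    """Inverse walk: after each 8-bit count t, exactly t*k bits of
    identifiers follow, so the next group's count is located without
    any separator."""
    ngr = int(bits[:8], 2)
    pos, groups = 8, []
    for _ in range(ngr):
        t = int(bits[pos:pos + 8], 2)
        pos += 8
        ids = [int(bits[pos + i * k:pos + (i + 1) * k], 2) for i in range(t)]
        pos += t * k
        groups.append(ids)
    return groups
```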
- This fifth visibility matrix M'sr is called the modified visibility matrix, as compared with the first, original visibility matrix Mnp.
- the identifiers of the lines i1, i2 of the fifth visibility matrix M'sr are recorded, in their order of appearance (for example from top to bottom) in this fifth matrix M'sr, in a list LIOC, called the list of identifiers of the ordered cells i1, i2 for the digital image coding of the fifth matrix M'sr, or list LIOC of the identifiers i1, i2 of the cells of the modified visibility matrix M'sr.
- This LIOC list of the identifiers of the ordered cells is distinct from the LID lists of the identical cells of the GR groups.
- the list LIOC of the identifiers of the ordered cells is coded on s*k bits, where k is the fixed number of bits used to encode a line or cell identifier i1 or i2.
- step E7 applied to the matrix Msr of the example above will give for example the visibility matrix M'sr
- This matrix M'sr has the following solid zone of 1:
- Steps E3, E4, E5, E6, E7, E8 and E9 are executed for each observation point P indicated in the request REQV and for the region RP containing the observation point P, once it has been determined. If necessary, the order of steps E5 and E6 can be swapped.
- the data DE of the response RV comprise, in FIG. 1, visibility data DVE proper and geometric data DGE from the database BD, equal to the data DVOBJ concerning the objects OBJ visible from the observation region RP.
- These data DGE are selected by the calculator CAL of the server SERV in the database BD, for example during step E4 or E5, using the list Tp of objects OBJ visible from the region RP.
- the visibility data DVE determined by the calculator CAL comprise the pseudo-digital image code I, equal to the fifth matrix M'sr encoded by the digital image coding of step E8.
- the following table shows the test results of the compression method described above. These tests were carried out on a database BD describing a virtual city of 5750 objects and 29153 view cells.
- according to test 2, the method according to the invention allows a lossless compression of the visibility data greater than the compression achieved in test 1 with a lossless coding of the state of the art.
- the CL station of the user having received the response RV containing the visibility data DVE implements a method of decompression of the visibility data DVE, as is represented in FIG. 9.
- the station CL comprises for this purpose means to automatically decompress the DVE data, for example by a calculator CAL2.
- knowing that the list Tp of objects OBJ visible from the region RP of the observation point P is coded on p bits, that the number NGR of groups GR of identical lines is coded on 8 bits, that the group data DGR (t, LID) of identical lines are each coded on 8 + t*k bits, and that the list LIOC of the identifiers of the ordered cells i1, i2 is coded on s*k bits, the station CL extracts from the received response RV this list Tp, this number NGR, these data DGR (t, LID), this list LIOC and the pseudo-digital image code I, and records them in its local database BL during a first extraction step E11.
- The data DGE of the objects of the response RV are also stored in the local database BL.
- A sixth Boolean matrix Mmp0 is constructed, having m rows and p columns, m and p being determined from the data DVE of the response RV: p from the list Tp, and m from the lists LID of the identifiers i3, i4 of identical lines of the groups GR and from the list LIOC of the identifiers i1, i2 of the ordered cells.
- In step E13, the code I is decoded in the station CL according to a digital image decoding, the inverse of that used for the digital image coding of step E8 in the server SERV, the characteristics of this decoding being present in a memory of the station CL.
- This decoding therefore produces the fifth visibility matrix M'sr of step E7.
- In step E14, the columns of the fifth visibility matrix M'sr are transferred into the empty columns of the sixth matrix Mmp0, located at the positions of the 1s of the visible-objects list Tp, that is, in the previous example, at the positions not filled by columns of 0s, to obtain a seventh matrix Mmp0'.
- This filling is done in the same order for the columns of the fifth matrix M'sr and for the empty columns of the sixth matrix Mmp0. Since the fifth matrix M'sr has fewer lines than the sixth matrix Mmp0, only the first lines of the sixth matrix Mmp0 (the first and second lines, in the example) are filled from M'sr.
- The list LIOC of the identifiers i1, i2 of the ordered cells indicates the identifiers of the first lines thus filled of the seventh matrix Mmp0'. It is therefore not necessary to perform the permutations inverse to those performed in step E7, since the identifiers of the filled lines of Mmp0 are known from LIOC.
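The column transfer of step E14 can be sketched as follows. The list-of-lists matrix representation and the function name are illustrative assumptions; only the scattering rule (columns land at the 1s of Tp, in order) comes from the description above.

```python
def fill_matrix(msr, tp, m):
    """Scatter the columns of M'sr (s rows) into an m-row, p-column matrix.

    Columns of M'sr land, in order, at the positions of the 1s of Tp; the
    other columns stay all-0. Only the first s rows receive data; the
    remaining rows stay empty until the duplicate lines are added in E15.
    """
    s = len(msr)
    p = len(tp)
    mmp0 = [[0] * p for _ in range(m)]     # sixth matrix Mmp0, all zeros
    visible_cols = [j for j, bit in enumerate(tp) if bit == 1]
    for row in range(s):
        for src_col, dst_col in enumerate(visible_cols):
            mmp0[row][dst_col] = msr[row][src_col]
    return mmp0                            # seventh matrix Mmp0'
```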
- In step E15, the duplicate lines, defined by the number NGR of groups of identical lines and the group data DGR(t, LID), are added to the seventh matrix Mmp0' obtained in step E14, to obtain an eighth matrix Mmp'.
- The added lines replace the unfilled lines of the seventh matrix Mmp0'.
- The data DGR are decoded as follows, inversely to the coding of step E6, using the number NGR of groups of identical lines, as well as the numbers t of identical lines and the lists LID of identifiers of identical lines contained in the data DGR.
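The duplicate-line expansion of step E15 can be sketched as follows. The use of LIOC to pick each group's representative row, and the `cell_ids` identifier-to-row mapping, are assumptions for illustration; the patent only states that the added lines replace the unfilled lines.

```python
def add_duplicates(mmp0p, lioc, dgr, cell_ids):
    """Copy each group's representative row into the rows of its duplicates.

    cell_ids maps a cell identifier to its row index in the matrix; lioc
    lists the identifiers of the rows already filled in step E14. For each
    group (t, lid), the identifier of lid found in lioc is taken as the
    representative, and its row is copied to the other identifiers' rows.
    """
    mmp = [row[:] for row in mmp0p]               # eighth matrix Mmp'
    for t, lid in dgr:
        rep = next(i for i in lid if i in lioc)   # representative (filled) line
        src = mmp[cell_ids[rep]]
        for ident in lid:
            if ident != rep:
                mmp[cell_ids[ident]] = src[:]     # duplicate the line
    return mmp
```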
- The station CL stores in its local database BL the eighth matrix Mmp' obtained in step E15.
- In step E16, the station CL searches to which cell iRP of the eighth matrix Mmp' the observation point P belongs, from the coordinates and identifiers of the cells i present in the local database BL and from the position information INFP of the observation point P, which was determined by the calculator CAL2 for sending the previous request REQV and was also stored in the local database BL with that sent request REQV.
- The cell i of the point P thus found will then, for example, have the identifier iP in the local base BL.
- The calculator CAL2 determines the position of the line corresponding to the cell iP in the eighth matrix Mmp' of the base BL, from this identifier iP and from the list LIOC of the identifiers i1, i2 of the ordered cells, recorded in the local base BL.
- The list LOPV(iP) of the objects visible from the observation point P is then obtained by the calculator CAL2 by traversing the line iP of the eighth matrix Mmp' of the base BL.
- In step E17, the station CL then uses the list LOPV(iP) of the objects visible from the observation point P and the data DVOBJ of the corresponding objects to display on the screen ECR a corresponding image. This image represents the objects OBJ seen from the observation point P, namely those defined by the data DVOBJ for which a 1 appears in the list LOPV(iP), and does not represent the objects OBJ not seen from the point P, namely those defined by the data DVOBJ for which a 0 appears in this list.
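Steps E16 and E17 reduce to a row lookup followed by a bit-mask filter, which can be sketched as below. Locating the row via `lioc.index(ip)` is an assumption for this illustration (it covers the filled rows listed in LIOC); the names are hypothetical.

```python
def visible_objects(mmp, lioc, ip, dvobj):
    """Return the objects to display for the cell with identifier ip.

    The row of the cell is located through LIOC (identifier -> row order),
    then the 1s of that row, i.e. the list LOPV(iP), select the visible
    objects among the object data dvobj.
    """
    lopv = mmp[lioc.index(ip)]            # line iP of the eighth matrix Mmp'
    return [obj for bit, obj in zip(lopv, dvobj) if bit == 1]
```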
- the observation point P can be moved by the user on the station CL to a new point P2.
- On the station CL, it is automatically determined whether the new observation point P2 is in a cell iRP already present in a matrix Mmp' recorded in the local base BL. If so, the corresponding matrix Mmp' is retrieved from the base BL of the station CL in order to perform again step E16 for this cell iRP of this matrix Mmp', then step E17. If not, a new request REQV2 containing the position information INFP2 of the new observation point P2 is sent by the station CL to the server SERV, which returns a new response RV2 according to the process described above. For each new response RV2, the server SERV determines whether data
- It is automatically determined on the station CL whether the new observation point P2 is in a cell iRP already present in the eighth matrix Mmp' obtained from the previous response RV, which made it possible to display the preceding image for the point P. If so, the station CL does not need to emit a new request REQV2 for the new point P2.
- The list LOPV(iP2) for the new observation point P2 can be deduced from the list LOPV(iP) for the preceding observation point P, by adding a list LOPV(iP2)+ of objects additional with respect to the list LOPV(iP), and by removing a list LOPV(iP2)- of objects absent with respect to the list LOPV(iP).
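This delta update amounts to two set operations, sketched below under the assumption that the lists are handled as sets of object identifiers (the patent stores LOPV as a bit list; the set form is only for illustration):

```python
def update_lopv(lopv_ip, added, removed):
    """Derive LOPV(iP2) from LOPV(iP) with the delta lists LOPV(iP2)+ and LOPV(iP2)-."""
    return (set(lopv_ip) | set(added)) - set(removed)
```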
- The visibility data are, for example, compressed in advance in the form of the matrix M'sr for each of the regions R, before the system SERV receives the position information INFP of the observation point.
- Computer programs are installed on the system SERV and on the device CL to execute the compression method on the system SERV and the decompression method on the device CL.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0409234 | 2004-08-31 | ||
PCT/FR2005/050678 WO2006027519A2 (fr) | 2004-08-31 | 2005-08-19 | Procede de compression de donnees de visibilite, de decompression, decodeur, systeme de compression |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1794718A2 true EP1794718A2 (de) | 2007-06-13 |
Family
ID=34948303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP05797404A Withdrawn EP1794718A2 (de) | 2004-08-31 | 2005-08-19 | Visibilitätsdaten-komprimierungsverfahren, komprimierungssystem und decoder |
Country Status (6)
Country | Link |
---|---|
US (1) | US8131089B2 (de) |
EP (1) | EP1794718A2 (de) |
JP (1) | JP4870079B2 (de) |
CN (1) | CN100594518C (de) |
BR (1) | BRPI0514776A (de) |
WO (1) | WO2006027519A2 (de) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4947376B2 (ja) * | 2007-12-26 | 2012-06-06 | アイシン・エィ・ダブリュ株式会社 | 3次元データ処理装置、3次元画像生成装置、ナビゲーション装置及び3次元データ処理プログラム |
WO2009120137A1 (en) * | 2008-03-25 | 2009-10-01 | Telefonaktiebolaget L M Ericsson (Publ) | Method for automatically selecting a physical cell identity (pci) of a long term evolution (lte) radio cell |
FR2948792B1 (fr) * | 2009-07-30 | 2011-08-26 | Oberthur Technologies | Procede de traitement de donnees protege contre les attaques par faute et dispositif associe |
US9171396B2 (en) * | 2010-06-30 | 2015-10-27 | Primal Space Systems Inc. | System and method of procedural visibility for interactive and broadcast streaming of entertainment, advertising, and tactical 3D graphical information using a visibility event codec |
US20150237323A1 (en) * | 2012-07-23 | 2015-08-20 | Thomson Licensing | 3d video representation using information embedding |
CN105718513B (zh) * | 2016-01-14 | 2019-11-15 | 上海大学 | jpg文件的压缩方法及解压缩方法 |
US10878859B2 (en) | 2017-12-20 | 2020-12-29 | Micron Technology, Inc. | Utilizing write stream attributes in storage write commands |
US11803325B2 (en) * | 2018-03-27 | 2023-10-31 | Micron Technology, Inc. | Specifying media type in write commands |
CN110992469B (zh) * | 2019-11-29 | 2024-01-23 | 四川航天神坤科技有限公司 | 海量三维模型数据的可视化方法及系统 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS558182A (en) * | 1978-07-05 | 1980-01-21 | Sony Corp | Color television picture receiver |
GB2038142B (en) * | 1978-12-15 | 1982-11-24 | Ibm | Image data compression |
US6388688B1 (en) * | 1999-04-06 | 2002-05-14 | Vergics Corporation | Graph-based visual navigation through spatial environments |
US6633317B2 (en) | 2001-01-02 | 2003-10-14 | Microsoft Corporation | Image-based walkthrough system and process employing spatial video streaming |
US7533088B2 (en) * | 2005-05-04 | 2009-05-12 | Microsoft Corporation | Database reverse query matching |
2005
- 2005-08-19 EP EP05797404A patent/EP1794718A2/de not_active Withdrawn
- 2005-08-19 WO PCT/FR2005/050678 patent/WO2006027519A2/fr active Application Filing
- 2005-08-19 BR BRPI0514776-0A patent/BRPI0514776A/pt not_active IP Right Cessation
- 2005-08-19 US US11/661,029 patent/US8131089B2/en not_active Expired - Fee Related
- 2005-08-19 JP JP2007528942A patent/JP4870079B2/ja not_active Expired - Fee Related
- 2005-08-19 CN CN200580036403A patent/CN100594518C/zh not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See references of WO2006027519A2 * |
Also Published As
Publication number | Publication date |
---|---|
US20070258650A1 (en) | 2007-11-08 |
WO2006027519A2 (fr) | 2006-03-16 |
CN100594518C (zh) | 2010-03-17 |
CN101061514A (zh) | 2007-10-24 |
JP4870079B2 (ja) | 2012-02-08 |
JP2008511888A (ja) | 2008-04-17 |
BRPI0514776A (pt) | 2008-06-24 |
WO2006027519A3 (fr) | 2006-07-06 |
US8131089B2 (en) | 2012-03-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1794718A2 (de) | Visibilitätsdaten-komprimierungsverfahren, komprimierungssystem und decoder | |
Wimmer et al. | Instant Points: Fast Rendering of Unprocessed Point Clouds. | |
Musialski et al. | A survey of urban reconstruction | |
US9208612B2 (en) | Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information | |
Lafarge et al. | A hybrid multiview stereo algorithm for modeling urban scenes | |
FR2852128A1 (fr) | Procede pour la gestion de la representation d'au moins une scene 3d modelisee. | |
WO2007042667A1 (fr) | Procedes, dispositifs et programmes de transmission d'une structure de toit et de construction d'une representation tridimensionnelle d'un toit de batiment a partir de ladite structure | |
WO2001022366A1 (fr) | Procede de construction d'un modele de scene 3d par analyse de sequence d'images | |
JP2001501348A (ja) | 3次元シーンの再構成方法と、対応する再構成装置および復号化システム | |
Gao et al. | Visualizing aerial LiDAR cities with hierarchical hybrid point-polygon structures | |
Andújar et al. | Visualization of Large‐Scale Urban Models through Multi‐Level Relief Impostors | |
Cignoni et al. | Ray‐casted blockmaps for large urban models visualization | |
Rüther et al. | Challenges in heritage documentation with terrestrial laser scanning | |
Cui et al. | LetsGo: Large-Scale Garage Modeling and Rendering via LiDAR-Assisted Gaussian Primitives | |
Décoret et al. | Billboard clouds | |
EP1654882A2 (de) | Verfahren zum repräsentieren einer bildsequenz durch verwendung von 3d-modellen und entsprechende einrichtungen und signal | |
EP1141899B1 (de) | Verfahren zur quellnetzvereinfahcung mit berücksichtigung der örtlichen krümmung und der örtlichen geometrie, und dessen anwendungen | |
Guinard et al. | Sensor-topology based simplicial complex reconstruction from mobile laser scanning | |
Vallet et al. | Fast and accurate visibility computation in urban scenes | |
EP1121665B1 (de) | Verfahren zur quellnetzkodierung mit optimierung der lage einer ecke die sich aus einer kantenzusammenführung ergibt und deren anwendung | |
WO2006003268A1 (fr) | Precede general de determination de liste d’elements potentiellement visibles par region pour des scenes 3d de tres grandes tailles representant des villes virtuelles d’altitudes variables. | |
Karner et al. | Virtual habitat: Models of the urban outdoors | |
Tudor et al. | Rapid high-fidelity visualisation of multispectral 3D mapping | |
EP2192555B1 (de) | Display of parameterised data | |
Cheng et al. | Associating UAS images through a graph‐based guiding strategy for boosting structure from motion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20070321 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20071120 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ORANGE |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20150820 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20160105 |