WO2016137888A1 - Visualizing datasets - Google Patents

Visualizing datasets

Info

Publication number
WO2016137888A1
Authority
WO
WIPO (PCT)
Prior art keywords
volume
slice
cross
volume slice
cursor position
Prior art date
Application number
PCT/US2016/018936
Other languages
French (fr)
Inventor
Cen Li
Bruce Cornish
Zhenghan Deng
Original Assignee
Schlumberger Technology Corporation
Schlumberger Canada Limited
Services Petroliers Schlumberger
Geoquest Systems B.V.
Priority date
Filing date
Publication date
Application filed by Schlumberger Technology Corporation, Schlumberger Canada Limited, Services Petroliers Schlumberger, and Geoquest Systems B.V.
Publication of WO2016137888A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/008 Cut plane or projection plane definition

Definitions

  • Patent Application No. 62/119,597 filed February 23, 2015, entitled “APPROACHES TO VISUALIZING DATASETS,” which is hereby incorporated by reference in its entirety.
  • embodiments of visualizing datasets relate to a method for performing a field operation of a field.
  • the method includes obtaining a volume dataset including data items of the field, where each of the data items is assigned to a position in a three dimensional (3D) volume and corresponds to a location in the field.
  • the method further includes obtaining a cursor position in a first volume slice of the volume dataset, and extracting, from the volume dataset and in response to obtaining the cursor position, a second volume slice of the volume dataset.
  • the first volume slice corresponds to at least a first portion of a first cross-section of the volume.
  • the second volume slice corresponds to a second portion of a second cross-section of the volume.
  • the second volume slice intersects the first volume slice at the cursor position.
  • the method further includes displaying the first volume slice on a display, and displaying the second volume slice on the display, the second volume slice blocking, in the display, a third portion of the first volume slice a distance away from the cursor position.
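The claimed flow above can be sketched on a toy voxel grid. This is a minimal illustration only: the dict-of-voxels representation and the helper names (`extract_slice`, `extract_cross_slice`) are assumptions for the sketch, not the patent's implementation.

```python
# Toy volume: data items keyed by (i, j, k) position, as in the claims.
def make_volume(ni, nj, nk):
    """Each data item's value encodes its position, for easy inspection."""
    return {(i, j, k): i + 10 * j + 100 * k
            for i in range(ni) for j in range(nj) for k in range(nk)}

def extract_slice(volume, axis, index):
    """First volume slice: the cross-section where the given axis == index."""
    return {pos: val for pos, val in volume.items() if pos[axis] == index}

def extract_cross_slice(volume, axis, cursor, half_width):
    """Second volume slice: a cross-section through the cursor position,
    bounded in k to a panel height of 2 * half_width + 1."""
    return {pos: val for pos, val in volume.items()
            if pos[axis] == cursor[axis]
            and abs(pos[2] - cursor[2]) <= half_width}

vol = make_volume(4, 4, 4)
first = extract_slice(vol, axis=0, index=1)                        # plane i == 1
second = extract_cross_slice(vol, axis=1, cursor=(1, 2, 2), half_width=1)
# The two slices intersect along the line i == 1, j == 2, which passes
# through the cursor position (1, 2, 2).
shared = set(first) & set(second)
```

The key property of the claims, that the second slice intersects the first at the cursor position, shows up here as the cursor belonging to both extracted slices.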
  • FIG. 1.1 is a schematic view, partially in cross-section, of a field in which one or more embodiments of visualizing datasets may be implemented.
  • FIG. 1.2 shows a schematic diagram of a system in accordance with one or more embodiments.
  • FIG. 2 shows a flowchart in accordance with one or more embodiments.
  • FIGS. 3.1-3.14 show an example in accordance with one or more embodiments.
  • FIGS. 4.1 and 4.2 show systems in accordance with one or more embodiments.
  • embodiments of the present disclosure provide methods, computing systems, and computer-readable media for visualizing a dataset for a volume.
  • a cross-cutting panel is provided that allows simultaneous visualization of data in multiple directions at a point in the volume. The approach may allow for improved understanding of a volume and more efficient interpretation of the volume.
  • FIG. 1.1 depicts a schematic view, partially in cross section, of a field (100) in which one or more embodiments of visualizing datasets may be implemented.
  • one or more of the modules and elements shown in FIG. 1.1 may be omitted, repeated, and/or substituted. Accordingly, embodiments of visualizing datasets should not be considered limited to the specific arrangements of modules shown in FIG. 1.1.
  • the field (100) includes the subterranean formation
  • the subterranean formation (104) includes several geological structures, such as a sandstone layer (106-1), a limestone layer (106-2), a shale layer (106-3), a sand layer (106-4), and a fault line (107).
  • data acquisition tools (102-1), (102-2), (102-3), and (102-4) are positioned at various locations along the field (100) for collecting data of the subterranean formation (104), referred to as survey operations.
  • the data acquisition tools are adapted to measure the subterranean formation (104) and detect the characteristics of the geological structures of the subterranean formation (104).
  • data plots (108-1), (108-2), (108-3), and (108-4) are depicted along the field (100) to demonstrate the data generated by the data acquisition tools.
  • the static data plot (108-1) is a seismic two-way response time.
  • Static data plot (108-2) is core sample data measured from a core sample of the subterranean formation (104).
  • Static data plot (108-3) is a logging trace, referred to as a well log.
  • Production decline curve or graph (108-4) is a dynamic data plot of the fluid flow rate over time.
  • Other data may also be collected, such as historical data, analyst user inputs, economic information, and/or other measurement data and other parameters of interest.
  • each of the wellsite system A (114-1), wellsite system B (114-2), and wellsite system C (114-3) is associated with a rig, a wellbore, and other wellsite equipment configured to perform wellbore operations, such as logging, drilling, fracturing, production, or other applicable operations.
  • the wellsite system A (114-1) is associated with a rig (101), a wellbore (103), and drilling equipment to perform a drilling operation.
  • the wellsite system B (114-2) and wellsite system C (114-3) are associated with respective rigs, wellbores, and other wellsite equipment, such as production equipment and logging equipment, to perform production operations and logging operations, respectively.
  • field operations of the field (100) are performed using the data acquisition tools and wellsite equipment, collectively referred to as field operation equipment.
  • the field operations are performed as directed by a surface unit (112).
  • the field operation equipment may be controlled by a field operation control signal that is sent from the surface unit (112).
  • the surface unit (112) is operatively coupled to the data acquisition tools (102-1), (102-2), (102-3), (102-4), and/or the wellsite systems.
  • the surface unit (112) is configured to send commands to the data acquisition tools (102-1), (102-2), (102-3), (102-4), and/or the wellsite systems and to receive data therefrom.
  • the surface unit (112) may be located at the wellsite system A (114-1), wellsite system B (114-2), wellsite system C (114-3), and/or remote locations.
  • the surface unit (112) may be provided with computer facilities (e.g., an E&P computer system (118)) for receiving, storing, processing, and/or analyzing data from the data acquisition tools (102-1), (102-2), (102-3), (102-4), the wellsite system A (114-1), wellsite system B (114-2), wellsite system C (114-3), and/or other parts of the field (100).
  • the surface unit (112) may also be provided with or have functionality for actuating mechanisms at the field (100). The surface unit (112) may then send command signals to the field (100) in response to data received, stored, processed, and/or analyzed, for example to control and/or optimize various field operations described above.
  • the surface unit (112) is communicatively coupled to the E&P computer system (118).
  • the data received by the surface unit (112) may be sent to the E&P computer system (118) for further analysis.
  • the E&P computer system (118) is configured to analyze, model, control, optimize, or perform management tasks of the aforementioned field operations based on the data provided from the surface unit (112).
  • the E&P computer system (118) is provided with functionality for manipulating and analyzing the data, such as analyzing well logs to determine electrofacies in the subterranean formation (104) or performing simulation, planning, and optimization of production operations of the wellsite system A (114-1), wellsite system B (114-2), and/or wellsite system C (114-3).
  • the result generated by the E&P computer system (118) may be displayed for an analyst user to view the result in a two dimensional (2D) display, three dimensional (3D) display, or other suitable displays.
  • the surface unit (112) is shown as separate from the E&P computer system (118) in FIG. 1.1, in other examples, the surface unit (112) and the E&P computer system (118) may also be combined.
  • although FIG. 1.1 shows the field (100) on land, the field (100) may be an offshore field in which the subterranean formation lies below the sea floor. In such cases, field data may be gathered from the field (100) using a variety of offshore techniques.
  • FIG. 1.2 shows more details of the E&P computer system (118) in which one or more embodiments of visualizing datasets may be implemented.
  • one or more of the modules and elements shown in FIG. 1.2 may be omitted, repeated, and/or substituted. Accordingly, embodiments of visualizing datasets should not be considered limited to the specific arrangements of modules shown in FIG. 1.2.
  • the E&P computer system (118) includes an E&P tool (230), a data repository (238) for storing intermediate data and resultant outputs of the E&P tool (230), a display device (239) for displaying outputs of the E&P tool (230), and a field task engine (240) for performing various tasks of the field operation.
  • the display device (239) may be a two-dimensional (2D) display device or a three-dimensional (3D) display device based on liquid crystal display, cathode ray tube, plasma display, or other display technology.
  • the data repository (238) may include one or more disk drive storage devices, one or more semiconductor storage devices, other suitable computer data storage devices, or combinations thereof.
  • content stored in the data repository (238) may be stored as a data file, a linked list, a data sequence, a database, a graphical representation, any other suitable data structure, or combinations thereof.
  • the intermediate data and resultant outputs of the E&P tool (230) include the volume dataset (232), volume slice A (233), volume slice B (234), cursor position (235), panel size (236), and cross-cutting angle (237).
  • the volume dataset (232) includes a collection of data items of the field (100), where each data item is assigned to a point/position in a three dimensional (3D) volume and corresponds to a location in the field (100).
  • the data item is a measured, interpolated, extrapolated, or otherwise calculated value of a property (e.g., physical property, chemical property, etc.) at the position in the field.
  • each data item may include a seismic data item where the 3D volume corresponds to a region of the field (100).
  • the volume dataset (232) includes results generated by a data acquisition tool, such as the data acquisition tool (102-3) depicted in FIG. 1.1 above.
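Each data item is assigned to a position in the 3D volume and corresponds to a location in the field. One common way to realize that correspondence, shown here only as an illustrative sketch (the origin and spacing values are invented for the example), is an axis-aligned affine map from voxel indices to field coordinates:

```python
def index_to_location(ijk, origin, spacing):
    """Map a voxel index (i, j, k) to a field location (x, y, depth) in
    metres, assuming a simple axis-aligned grid."""
    return tuple(o + n * s for o, n, s in zip(origin, ijk, spacing))

origin = (450_000.0, 6_780_000.0, 0.0)   # easting, northing, depth (m); illustrative
spacing = (25.0, 25.0, 4.0)              # bin sizes along i, j, k (m); illustrative

loc = index_to_location((10, 20, 100), origin, spacing)
# → (450250.0, 6780500.0, 400.0)
```

Real surveys may use rotated or curvilinear grids, in which case the map is more involved, but the principle of tying each voxel to a field location is the same.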
  • a volume slice (e.g., volume slice A (233), volume slice B (234)) is a portion of the volume dataset (232) defined by a cross-sectional plane of the 3D volume.
  • a volume slice is a cross section of the volume.
  • the cross-sectional plane may be a portion of a linear or curvilinear surface.
  • the cursor position (235) is a point/position within the 3D volume that is specified by a user of the E&P computer system (118).
  • the cursor position may be specified by a pointing device (not shown), such as a mouse, finger on a touchpad, stylus, etc.
  • the cursor position may be a position of a displayed cursor as entered by a user when specifying coordinates.
  • the point/position pointed to by the cursor position (235) corresponds to a point of interest in viewing the volume dataset (232).
  • the panel size (236) and cross-cutting angle (237) are parameters specified by the user to control how the volume slice (e.g., volume slice A (233), volume slice B (234)) is displayed.
  • the E&P tool (230) includes the input receiver (221), the volume slice analyzer (222), and the rendering engine (223). Each of these components of the E&P tool (230) is described below.
  • the input receiver (221) is configured to obtain the volume dataset (232) and user inputs.
  • the input receiver (221) may include hardware, software, and/or graphical user interface widgets that are configured to receive input.
  • the input receiver (221) obtains the volume dataset (232) from the surface unit (112) depicted in FIG. 1.1 above.
  • the input receiver (221) is further configured to obtain one or more of the cursor position (235), panel size (236), and cross-cutting angle (237) as a user input of the E&P computer system (118).
  • the input receiver (221) may obtain the volume dataset (232) and/or user inputs intermittently, periodically, in response to a user activation, or as triggered by an event. Accordingly, the intermediate and final results of the volume slice analyzer (222) and the rendering engine (223) may be generated intermittently, periodically, in response to a user activation, or as triggered by an event.
  • the volume slice analyzer (222) is configured to generate, based on the cursor position (235), the volume slice A (233) of the volume dataset (232).
  • the volume slice A (233) corresponds to a cross-section of the volume of the volume dataset (232).
  • the cross-section may be defined by a plane passing through the cursor position (235) and oriented perpendicularly to a viewing direction specified by a user.
  • the volume slice A (233) includes the cross-section and data items assigned to points on the cross-section.
  • the volume slice A (233) includes a portion of the cross-section and data items assigned to points on the portion of the cross-section.
  • the volume slice analyzer (222) is configured to extract, from the volume dataset (232) and in response to obtaining the cursor position (235), the volume slice B (234) of the volume dataset (232).
  • the volume slice B (234) corresponds to another cross-section, or a portion of that cross-section, of the volume of the volume dataset (232). In particular, the volume slice B (234) intersects the volume slice A (233) at the cursor position (235).
  • the dimensions of the volume slice B (234) are defined based on the panel size (236).
  • the panel size (236) may include a height and width of the volume slice B (234).
  • the volume slice B (234) having the panel size (236) intersects the volume slice A (233) with an intersecting angle that is defined by the cross-cutting angle (237).
  • the volume slice B (234) is referred to as a cross-cutting panel to the volume slice A (233).
  • the rendering engine (223) is configured to generate a 2D or 3D display image based on the output of the volume slice analyzer (222).
  • the 2D or 3D display image is provided to the display device (239) and displayed to a user.
  • the display image is a composite image of the volume slice A (233) and volume slice B (234).
  • the composite image may show the cross-cutting panel blocking a portion of the volume slice A (233) with the volume slice B (234).
  • the data items of the blocked portion of the volume slice A (233) are not displayed.
  • the cross-cutting panel is displayed at a location on the composite image a distance away from where the cursor position (235) points to.
  • the data items on the volume slice A (233) around the cursor position (235) are not obscured.
  • the data items on the volume slice A (233) within the distance from the cursor position (235) are displayed without being blocked by the cross-cutting panel. Accordingly, the user may view the data items around the cursor position (235) for both the volume slice A (233) and volume slice B (234) simultaneously.
  • the distance separating the cursor position and the cross-cutting panel may be pre-configured, specified by the user, or automatically adjusted based on the panel size and/or cross-cutting angle.
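The compositing rule described above, that the panel blocks slice A only beyond a protection distance from the cursor, can be sketched per pixel. The function name and 2D screen-coordinate model are illustrative assumptions, not the rendering engine's actual interface:

```python
def composite_pixel(slice_a_val, panel_val, pixel_pos, cursor_pos, keep_dist):
    """Return the value displayed at pixel_pos (2D screen coordinates).
    panel_val is None where the cross-cutting panel does not cover the pixel."""
    dx = pixel_pos[0] - cursor_pos[0]
    dy = pixel_pos[1] - cursor_pos[1]
    within_keep = (dx * dx + dy * dy) ** 0.5 <= keep_dist
    if panel_val is not None and not within_keep:
        return panel_val      # panel blocks slice A away from the cursor
    return slice_a_val        # near the cursor, slice A is never obscured

# Near the cursor, slice A's value wins even where the panel overlaps it;
# farther away, the panel blocks slice A.
near = composite_pixel(1.0, 2.0, pixel_pos=(11, 10), cursor_pos=(10, 10), keep_dist=5)
far = composite_pixel(1.0, 2.0, pixel_pos=(30, 10), cursor_pos=(10, 10), keep_dist=5)
```

This captures the stated goal: the user sees both slices simultaneously in the neighbourhood of the point of interest.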
  • the E&P computer system (118) includes the field task engine (240) that is configured to generate a field operation control signal based at least on a result generated by the E&P tool (230), such as based on a user input in response to displaying the 2D or 3D image described above.
  • the field operation equipment depicted in FIG. 1.1 above may be controlled by the field operation control signal.
  • the field operation control signal may be used to control drilling equipment, an actuator, a fluid valve, or other electrical and/or mechanical devices disposed about the field (100) depicted in FIG. 1.1 above.
  • the field planning operation, drilling operation, production operation, etc. may be performed based on the 2D or 3D image described above.
  • the E&P computer system (118) may include one or more system computers, such as shown in FIG. 4 below, which may be implemented as a server or any conventional computing system.
  • FIG. 2 depicts an example method in accordance with one or more embodiments.
  • the method depicted in FIG. 2 may be practiced using the E&P computer system (118) described in reference to FIGS. 1.1 and 1.2 above.
  • one or more of the elements shown in FIG. 2 may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of visualizing datasets should not be considered limited to the specific arrangements of elements shown in FIG. 2.
  • a volume slice of a volume dataset is displayed.
  • the volume slice is extracted from the volume dataset based on a pre-determined or user specified viewing direction.
  • measurement values for the volume dataset are obtained using data acquisition tools, and the values are then assembled into a complete volume dataset.
  • a cursor position in the volume slice is obtained.
  • the cursor position corresponds to a point of interest of the user when viewing the volume dataset.
  • a cross-cutting angle and a panel size are also obtained.
  • a cross-cutting panel is extracted at the point of interest based on the cross-cutting angle and the panel size. For example, a cross-cutting volume slice may be identified first based on the volume slice and the cross-cutting angle. The cross-cutting panel may then be extracted from the cross-cutting volume slice based on the cursor position and the panel size.
  • the cross-cutting panel is displayed blocking a portion of the volume slice on the display.
  • the cross-cutting panel is displayed away from the point of interest to avoid obscuring the displayed volume slice around the point of interest.
  • the cross-cutting angle, the panel size, and the display location of the cross-cutting panel are adjusted by the user to achieve a desired view of the volume dataset in the vicinity of the point of interest.
  • the distance separating the cursor position and the cross-cutting panel may be pre-configured, specified by the user, or automatically adjusted based on the panel size and/or cross-cutting angle.
  • the cross-cutting panel and the volume slice are displayed as a composite image on a 2D or 3D display.
  • in Block 205, a determination is made as to whether a modified cursor position is received subsequent to obtaining the current cursor position. If the determination is positive (i.e., the cursor position is modified), the method returns to Block 202 and the displayed cross-cutting panel is adjusted accordingly. If the determination is negative (i.e., the cursor position is not modified), the method proceeds to Block 206.
  • in Block 206, in response to displaying the cross-cutting panel blocking a portion of the volume slice away from the point of interest, an input is received from a user.
  • a field operation control signal is generated based on the input.
  • the field operation control signal is generated based on the user input in response to displaying the 2D or 3D image described above.
  • a field operation is performed based on the control signal.
  • field operation equipment may be controlled by the field operation control signal.
  • the field operation control signal may be used to control drilling equipment, an actuator, a fluid valve, or other electrical and/or mechanical devices disposed about the field.
  • the field planning operation, drilling operation, production operation, etc. may be performed based on the 2D or 3D image described above.
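The display-and-track loop of FIG. 2 (Blocks 201-205) can be sketched as an event loop. Everything here is illustrative: the helper functions are stand-ins for the slice analyzer and rendering engine, and the panel extraction ignores the cross-cutting angle for brevity.

```python
def extract_base_slice(volume):
    """Block 201: the initially displayed volume slice (here, the plane i == 0)."""
    return {p: v for p, v in volume.items() if p[0] == 0}

def extract_panel(volume, cursor, size, angle):
    """Block 203: a toy cross-cutting panel through the cursor position
    (the angle parameter is accepted but ignored in this sketch)."""
    return {p: v for p, v in volume.items()
            if p[1] == cursor[1] and abs(p[2] - cursor[2]) <= size}

def run_visualization(volume, get_cursor, display, panel_size, angle):
    """Blocks 201-205: display a slice, then redraw the panel as the cursor moves."""
    base = extract_base_slice(volume)
    display(base)                                             # Block 201
    cursor = get_cursor()                                     # Block 202
    while cursor is not None:
        panel = extract_panel(volume, cursor, panel_size, angle)  # Block 203
        display(base, panel, cursor)                          # Block 204
        new_cursor = get_cursor()                             # Block 205
        if new_cursor == cursor:
            break                                             # on to Blocks 206-208
        cursor = new_cursor

# Simulate the loop with a scripted cursor and a display that records frames.
frames = []
cursors = iter([(0, 1, 1), (0, 2, 2), (0, 2, 2)])
vol = {(i, j, k): 0 for i in range(2) for j in range(3) for k in range(3)}
run_visualization(vol,
                  get_cursor=lambda: next(cursors, None),
                  display=lambda base, panel=None, cursor=None:
                      frames.append((len(base), len(panel or {}), cursor)),
                  panel_size=1, angle=90)
```

Each recorded frame holds the sizes of the displayed slice and panel plus the cursor, so the cursor-tracking redraw of Block 205 is visible in the frame sequence.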
  • FIGS. 3.1-3.14 describe an example for visualizing data.
  • the data is a volumetric dataset.
  • the datasets are visualized and interpreted by software.
  • a volume dataset may be visualized using intersection planes. An intersection plane may be used to visualize the data through which the plane runs.
  • the intersection plane with the associated data may be referred to as the volume slice.
  • the volume dataset may include seismic data that indirectly describe subterranean geological structure. Seismic data may also include measurement noises and may be interpreted by geoscientists to derive the geological structure information.
  • the geoscientist may manually pick lines on a volume slice using a pointing device, such as a computer mouse. Multiple volume slices are picked, and the picked lines collectively define sub-surfaces.
  • One or more embodiments create simultaneous visualizations of data, such as seismic data, in two different directions at a location. By creating simultaneous visualizations, one or more embodiments may facilitate improved accuracy and efficiency of volume visualization, such as the seismic interpretation.
  • the volumetric data includes a 3D dataset of elements called "voxels."
  • voxels may be uniformly distributed throughout the volume, such as the volume (310) shown in FIG. 3.1. However, some embodiments may not have uniform distribution.
  • a volume may include at least three axes (i, j, k) that are orthogonal to each other and that may define a volume coordinate system. Each of the individual voxels has a distinct position in the volume. The position may be associated with a coordinate location (i, j, k). In many embodiments, each of i, j, k is a non-negative integer.
  • the individual voxels may also have various associated attributes such as, for example, color, illumination, opacity, velocity, amplitude, etc.
  • the attributes of the voxels may vary in different areas (e.g., CAT scans in medicine, confocal microscopy, and seismic data and its derivatives in geoscience).
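A voxel with the positional coordinate and associated attributes described above might be represented as a simple record. The field names below are illustrative choices for the sketch, not a schema from the patent:

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    # Position in the volume coordinate system (non-negative integers).
    i: int
    j: int
    k: int
    # Associated attributes; the patent lists color, illumination,
    # opacity, velocity, amplitude, etc. Two examples suffice here.
    amplitude: float = 0.0   # e.g. seismic amplitude
    opacity: float = 1.0     # rendering attribute

v = Voxel(1, 2, 3, amplitude=-0.42)
```

In practice, large seismic volumes store attributes in packed arrays rather than per-voxel objects, but the record form makes the position-plus-attributes structure explicit.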
  • the volume (310) shown in FIG. 3.1 has an i coordinate ranging from 0 to
  • Typical volumes may include many more voxels, such as thousands or millions.
  • One approach to visualizing a volume is to display orthogonal slices.
  • FIG. 3.5 illustrates a screenshot A (351) of a seismic volume and screenshot B (352), screenshot C (353), and screenshot D (354) of the seismic volume's slices along 3 orthogonal directions.
  • a volume slice may be defined as a linear surface in any direction, not necessarily perpendicular to any of the volume's three orthogonal axes. While the examples are given in connection with an orthogonal slice, the approach can be used on any volume slice.
  • the volume may have a curvilinear coordinate system and the volume slice may be defined as a curved surface.
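Since a slice need not be perpendicular to any axis, extraction generally means sampling the volume along an arbitrarily oriented surface. The sketch below shows the 2D analogue (a line rotated about the cursor) with nearest-voxel lookup; the function name and sampling scheme are illustrative assumptions:

```python
import math

def sample_rotated_slice(volume, cursor, angle_deg, half_extent):
    """Sample along a line through the cursor, rotated in the i-j plane by
    angle_deg, using nearest-voxel lookup. Returns {offset: data item}."""
    ci, cj, ck = cursor
    a = math.radians(angle_deg)
    samples = {}
    for t in range(-half_extent, half_extent + 1):
        i = round(ci + t * math.cos(a))
        j = round(cj + t * math.sin(a))
        key = (i, j, ck)
        if key in volume:          # skip samples that fall outside the volume
            samples[t] = volume[key]
    return samples

vol = {(i, j, 0): i * 10 + j for i in range(5) for j in range(5)}
line_45 = sample_rotated_slice(vol, cursor=(2, 2, 0), angle_deg=45, half_extent=2)
```

A production implementation would interpolate between voxels (and, for a curvilinear coordinate system, follow the curved surface), but the rounding version shows the idea.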
  • FIG. 3.7 illustrates a small rectangular panel used to perpendicularly cut into the volume slice at the point. The rectangular panel may be referred to as a cross-cutting panel (371).
  • the size of the cross-cutting panel (371) may have a default value.
  • the size may be customizable.
  • FIG. 3.8 illustrates the cross-cutting display (381) that displays the data extracted from the cross-cutting panel (371).
  • the cross-cutting display tracks the cursor to its new location by repeating the process at a new cursor location.
  • FIG. 3.9 illustrates that, in certain embodiments, the angle (i.e., cross-cutting angle (391)) at which the cross-cutting panel intersects with the volume slice may be varied.
  • FIG. 3.10 illustrates a scenario in which the cross-cutting panel (311) intersects with the volume slice (312) at a 90° angle, with FIG. 3.11 illustrating an example on a seismic volume (313) where grey scale shading represents data extracted from the volume slice (312) except within the cross-cutting display (314).
  • the grey scale shading within the cross-cutting display (314) represents data extracted from the cross-cutting panel (311).
  • the cross-cutting angle may, in certain embodiments, be continuously varied at a given cursor location to inspect its surroundings at all angles.
  • FIG. 3.12 illustrates a scenario in which the cross-cutting panel (321) intersects with the volume slice (322) at a 45° angle, with FIG. 3.13 illustrating an example on a seismic volume (323).
  • the 45° cross-cutting angle may, in certain embodiments, be displayed to the user using an angle indicator (324), which is a short line segment oriented at 45° with respect to the cross-cutting display (325).
  • the angle indicator (324) rotates accordingly as the user inspects data surrounding the cursor location at all angles.
  • Cross-cutting cursor tracking may also be used to display different types of volume attributes.
  • real time computed attribute data may be displayed on the cross-cutting panel.
  • FIG. 3.14 illustrates a seismic volume slice (341) with cross-cutting display (342) of its instantaneous frequency attribute.
  • a cross-cutting panel user interface may provide a user with options to configure the cross-cutting panel.
  • the user interface may allow the user to vary the location of the cursor position that identifies a point-of-interest in the volume.
  • the user interface may also allow the user to change the cross-cutting angle or otherwise rotate the cross-cutting plane associated with the cross-cutting panel.
  • Embodiments may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used.
  • the computing system (400) may include one or more computer processors (402), non-persistent storage (404) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage (406) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface (412) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), and numerous other elements and functionalities.
  • the computer processor(s) (402) may be an integrated circuit for processing instructions.
  • the computer processor(s) may be one or more cores or micro-cores of a processor.
  • the computing system (400) may also include one or more input devices (410), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
  • the communication interface (412) may include an integrated circuit for connecting the computing system (400) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
  • the computing system (400) may include one or more output devices (408), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device.
  • One or more of the output devices may be the same or different from the input device(s).
  • the input and output device(s) may be locally or remotely connected to the computer processor(s) (402), non-persistent storage (404), and persistent storage (406).
  • Software instructions in the form of computer readable program code to perform embodiments of the invention may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium.
  • the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the invention.
  • the computing system (400) in FIG. 4.1 may be connected to or be a part of a network.
  • the network (420) may include multiple nodes (e.g., node X (422), node Y (424)).
  • Each node may correspond to a computing system, such as the computing system shown in FIG. 4.1, or a group of nodes combined may correspond to the computing system shown in FIG. 4.1.
  • embodiments of the invention may be implemented on a node of a distributed system that is connected to other nodes.
  • embodiments of the invention may be implemented on a distributed computing system having multiple nodes, where each portion of the invention may be located on a different node within the distributed computing system. Further, one or more elements of the aforementioned computing system (400) may be located at a remote location and connected to the other elements over a network.
  • the node may correspond to a blade in a server chassis that is connected to other nodes via a backplane.
  • the node may correspond to a server in a data center.
  • the node may correspond to a computer processor or micro-core of a computer processor with shared memory and/or resources.
  • the nodes may be configured to provide services for a client device (426).
  • the nodes may be part of a cloud computing system.
  • the nodes may include functionality to receive requests from the client device (426) and transmit responses to the client device (426).
  • the client device (426) may be a computing system, such as the computing system shown in FIG. 4.1. Further, the client device (426) may include and/or perform all or a portion of one or more embodiments of the invention.
  • the computing systems described in FIGS. 4.1 and 4.2 may include functionality to perform a variety of operations disclosed herein.
  • the computing system(s) may perform communication between processes on the same or different system.
  • a variety of mechanisms, employing some form of active or passive communication, may facilitate the exchange of data between processes on the same device. Examples representative of these inter-process communications include, but are not limited to, the implementation of a file, a signal, a socket, a message queue, a pipeline, a semaphore, shared memory, message passing, and a memory-mapped file. Further details pertaining to a couple of these non-limiting examples are provided below.
  • sockets may serve as interfaces or communication channel end-points enabling bidirectional data transfer between processes on the same device.
  • a server process (e.g., a process that provides data) may create a first socket object.
  • the server process binds the first socket object, thereby associating the first socket object with a unique name and/or address.
  • the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data).
  • the client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object.
  • the client process then transmits the connection request to the server process.
  • the server process may accept the connection request, establishing a communication channel with the client process; or the server process, busy handling other operations, may queue the connection request in a buffer until the server process is ready.
  • An established connection informs the client process that communications may commence.
  • the client process may generate a data request specifying the data that the client process wishes to obtain.
  • the data request is subsequently transmitted to the server process.
  • the server process analyzes the request and gathers the requested data.
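The socket-based exchange described above may be illustrated with a short, non-limiting sketch in Python; the address, request format, and reply contents are illustrative assumptions, not part of the disclosure:

```python
import socket
import threading

def server(listener):
    # The server process waits and listens, accepts a connection request,
    # then analyzes the data request and gathers/returns the requested data.
    conn, _ = listener.accept()
    request = conn.recv(1024).decode()            # e.g., "GET item-7"
    conn.sendall(f"data for {request.split()[1]}".encode())
    conn.close()

# Server side: create a first socket object and bind it, associating it
# with a unique name and/or address.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))                   # OS assigns a free port
listener.listen(1)
addr = listener.getsockname()

t = threading.Thread(target=server, args=(listener,))
t.start()

# Client side: create a second socket object and generate a connection
# request that names the first socket object's address.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)
client.sendall(b"GET item-7")                     # the data request
reply = client.recv(1024).decode()
client.close()
t.join()
listener.close()

print(reply)                                      # the requested data
```

The listening socket here stands in for the "first socket object" and the client socket for the "second socket object" described above.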
  • shared memory refers to the allocation of virtual memory space in order to provide a mechanism by which data may be communicated and/or accessed by multiple processes.
  • an initializing process first creates a shareable segment in persistent or non-persistent storage. Post creation, the initializing process then mounts the shareable segment, subsequently mapping the shareable segment into the address space associated with the initializing process. Following the mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes that may also write and read data to and from the shareable segment.
  • Changes made to the data in the shareable segment by one process may immediately affect other processes, which are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment maps to the address space of that authorized process. Often, only one authorized process, other than the initializing process, may mount the shareable segment at any given time.
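A minimal sketch of the shareable-segment mechanism described above, using Python's `multiprocessing.shared_memory` module (segment size and contents are illustrative; the second mapping is shown in the same process for brevity, whereas normally it would belong to a separate authorized process):

```python
from multiprocessing import shared_memory

# Initializing process: create a shareable segment and map it into
# this process's address space.
seg = shared_memory.SharedMemory(create=True, size=16)
seg.buf[:5] = b"hello"

# An authorized process attaches to the same segment by its unique name,
# mapping the shareable segment into its own address space.
view = shared_memory.SharedMemory(name=seg.name)
assert bytes(view.buf[:5]) == b"hello"

# Changes made through one mapping are immediately visible through the other.
view.buf[:5] = b"world"
result = bytes(seg.buf[:5])

view.close()
seg.close()
seg.unlink()        # remove the segment from the system
print(result)
```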
  • the computing system performing one or more embodiments of the invention may include functionality to receive data from a user.
  • a user may submit data via a graphical user interface (GUI) on the user device.
  • Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device.
  • information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor.
  • the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user's selection.
  • a request to obtain data regarding the particular item may be sent to a server operatively connected to the user device through a network.
  • the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL.
  • the server may extract the data regarding the particular selected item and send the data to the device that initiated the request.
  • the contents of the received data regarding the particular item may be displayed on the user device in response to the user's selection.
  • the data received from the server after selecting the URL link may provide a web page in Hyper Text Markup Language (HTML) that may be rendered by the web client and displayed on the user device.
  • the computing system may extract one or more data items from the obtained data.
  • the extraction may be performed as follows by the computing system in FIG. 4.1.
  • the organizing pattern (e.g., grammar, schema, layout) of the data is determined, which may be based on one or more of the following: position (e.g., bit or column position, Nth token in a data stream, etc.), attribute (where the attribute is associated with one or more values), or a hierarchical/tree structure (having layers of nodes at different levels of detail, such as in nested packet headers or nested document sections).
  • the raw, unprocessed stream of data symbols is parsed, in the context of the organizing pattern, into a stream (or layered structure) of tokens (where each token may have an associated token "type").
  • extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure).
  • the token(s) at the position(s) identified by the extraction criteria are extracted.
  • the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted.
  • the token(s) associated with the node(s) matching the extraction criteria are extracted.
  • the extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).
  • the extracted data may be used for further processing by the computing system.
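The extraction steps described above (organizing pattern, tokenization, extraction criteria) may be sketched as follows; the raw data, attribute names, and pattern are illustrative assumptions only:

```python
import re

raw = "depth=1200;unit=m;amp=0.73"     # raw, unprocessed stream of data symbols

# Organizing pattern: attribute=value pairs separated by ';'.
# Parsing yields a stream of (attribute, value) tokens.
tokens = re.findall(r"(\w+)=([^;]+)", raw)

# Extraction criterion: the token(s) whose attribute satisfies the criterion.
criterion = "amp"
extracted = [value for attr, value in tokens if attr == criterion]
print(extracted)
```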
  • the computing system of FIG. 4.1, while performing one or more embodiments of the invention, may perform data comparison between two data values, A and B.
  • the comparison may be performed by submitting A, B, and an opcode specifying an operation related to the comparison into an arithmetic logic unit (ALU) (i.e., circuitry that performs arithmetic and/or bitwise logical operations on the two data values).
  • the ALU outputs the numerical result of the operation and/or one or more status flags related to the numerical result.
  • the status flags may indicate whether the numerical result is a positive number, a negative number, zero, etc.
  • the comparison may be executed. For example, in order to determine if A > B, B may be subtracted from A (i.e., A - B), and the status flags may be read to determine if the result is positive (i.e., if A > B, then A - B > 0).
  • A and B may be vectors, and comparing A with B involves comparing the first element of vector A with the first element of vector B, the second element of vector A with the second element of vector B, etc.
  • if A and B are strings, the binary values of the strings may be compared.
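The subtract-and-read-flags comparison described above may be sketched as follows, modeling the status flags in software rather than in ALU circuitry (the vector values are illustrative):

```python
def compare_gt(a, b):
    # Mimic the ALU approach: subtract B from A and inspect the
    # "status flags" of the numerical result (its sign and zero-ness).
    result = a - b
    negative_flag = result < 0
    zero_flag = result == 0
    # A > B exactly when the result is neither negative nor zero.
    return not negative_flag and not zero_flag

# Element-wise comparison of vectors A and B, as described above.
A = [5, 2, 9]
B = [3, 7, 9]
flags = [compare_gt(x, y) for x, y in zip(A, B)]
print(flags)
```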
  • the computing system in FIG. 4.1 may implement and/or be connected to a data repository.
  • a data repository is a database.
  • a database is a collection of information configured for ease of data retrieval, modification, re-organization, and deletion.
  • a Database Management System (DBMS) is a software application that provides an interface for users to define, create, query, update, or administer databases.
  • the user or software application may submit a statement or query to the DBMS.
  • the DBMS interprets the statement.
  • the statement may be a select statement to request information, update statement, create statement, delete statement, etc.
  • the statement may include parameters that specify data, or data container (database, table, record, column, view, etc.), identifier(s), conditions (comparison operators), functions (e.g. join, full join, count, average, etc.), sort (e.g. ascending, descending), or others.
  • the DBMS may execute the statement.
  • the DBMS may access a memory buffer, a reference or index of a file for read, write, deletion, or any combination thereof, for responding to the statement.
  • the DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query.
  • the DBMS may return the result(s) to the user or software application.
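The statement-submission flow described above may be sketched with an in-memory SQLite database standing in for the data repository (the table, columns, and values are illustrative assumptions):

```python
import sqlite3

# An in-memory database stands in for the data repository.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wells (name TEXT, depth REAL)")
db.executemany("INSERT INTO wells VALUES (?, ?)",
               [("A-1", 1200.0), ("B-2", 950.0), ("C-3", 1500.0)])

# The user submits a statement whose parameters specify the data container
# (table), a condition (comparison operator), and a sort order; the DBMS
# interprets and executes the statement, then returns the result(s).
rows = db.execute(
    "SELECT name FROM wells WHERE depth > ? ORDER BY depth DESC", (1000.0,)
).fetchall()
db.close()
print(rows)
```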
  • the computing system of FIG. 4.1 may include functionality to present raw and/or processed data, such as results of comparisons and other processing.
  • presenting data may be accomplished through various presenting methods.
  • data may be presented through a user interface provided by a computing device.
  • the user interface may include a GUI that displays information on a display device, such as a computer monitor or a touchscreen on a handheld computer device.
  • the GUI may include various GUI widgets that organize what data is shown as well as how data is presented to a user.
  • the GUI may present data directly to the user, e.g., data presented as actual data values through text, or rendered by the computing device into a visual representation of the data, such as through visualizing a data model.
  • a GUI may first obtain a notification from a software application requesting that a particular data object be presented within the GUI.
  • the GUI may determine a data object type associated with the particular data object, e.g., by obtaining data from a data attribute within the data object that identifies the data object type.
  • the GUI may determine any rules designated for displaying that data object type, e.g., rules specified by a software framework for a data object class or according to any local parameters defined by the GUI for presenting that data object type.
  • the GUI may obtain data values from the particular data object and render a visual representation of the data values within a display device according to the designated rules for that data object type.
  • Data may also be presented through various audio methods.
  • data may be rendered into an audio format and presented as sound through one or more speakers operably connected to a computing device.
  • Data may also be presented to a user through haptic methods.
  • haptic methods may include vibrations or other physical signals generated by the computing system.
  • data may be presented to a user using a vibration generated by a handheld computer device with a predefined duration and intensity of the vibration to communicate the data.


Abstract

A method for performing a field operation of a field includes obtaining a volume dataset of the field, obtaining a cursor position in a first volume slice of the volume dataset, and extracting, from the volume dataset and in response to obtaining the cursor position, a second volume slice of the volume dataset. The second volume slice intersects the first volume slice at the cursor position. The method further includes displaying the first volume slice and the second volume slice on a display, the second volume slice blocking, in the display, a portion of the first volume slice a distance away from the cursor position.

Description

VISUALIZING DATASETS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority under 35 U.S.C. § 119(e) from Provisional Patent Application No. 62/119,597 filed February 23, 2015, entitled "APPROACHES TO VISUALIZING DATASETS," which is hereby incorporated by reference in its entirety.
BACKGROUND
[0002] The oil and gas industry is using increasingly sophisticated data acquisition techniques to produce ever larger volumetric datasets. These datasets are visualized and interpreted using various software applications.
SUMMARY
[0003] In general, in one aspect, embodiments of visualizing datasets relate to a method for performing a field operation of a field. The method includes obtaining a volume dataset including data items of the field, where each of the data items is assigned to a position in a three dimensional (3D) volume and corresponds to a location in the field. The method further includes obtaining a cursor position in a first volume slice of the volume dataset, and extracting, from the volume dataset and in response to obtaining the cursor position, a second volume slice of the volume dataset. The first volume slice corresponds to at least a first portion of a first cross-section of the volume, and the second volume slice corresponds to a second portion of a second cross-section of the volume. The second volume slice intersects the first volume slice at the cursor position. The method further includes displaying the first volume slice on a display, and displaying the second volume slice on the display, the second volume slice blocking, in the display, a third portion of the first volume slice a distance away from the cursor position.
[0004] Other aspects will be apparent from the following description and the appended claims.
BRIEF DESCRIPTION OF DRAWINGS
[0005] The appended drawings illustrate several embodiments of visualizing datasets and are not to be considered limiting of its scope, for visualizing datasets may admit to other equally effective embodiments.
[0006] FIG. 1.1 is a schematic view, partially in cross-section, of a field in which one or more embodiments of visualizing datasets may be implemented.
[0007] FIG. 1.2 shows a schematic diagram of a system in accordance with one or more embodiments.
[0008] FIG. 2 shows a flowchart in accordance with one or more embodiments.
[0009] FIGS. 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, and 3.14 show an example in accordance with one or more embodiments.
[0010] FIGS. 4.1 and 4.2 show systems in accordance with one or more embodiments.
DETAILED DESCRIPTION
[0011] Specific embodiments will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
[0012] In the following detailed description of embodiments, numerous specific details are set forth in order to provide a more thorough understanding. However, it will be apparent to one of ordinary skill in the art that one or more embodiments may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
[0013] In general, embodiments of the present disclosure provide methods, computing systems, and computer-readable media for visualizing a dataset for a volume. In one embodiment, a cross-cutting panel is provided that allows simultaneous visualization of data in multiple directions at a point in the volume. The approach may allow for improved understanding of a volume and more efficient interpretation of the volume.
[0014] FIG. 1.1 depicts a schematic view, partially in cross section, of a field (100) in which one or more embodiments of visualizing datasets may be implemented. In one or more embodiments, one or more of the modules and elements shown in FIG. 1.1 may be omitted, repeated, and/or substituted. Accordingly, embodiments of visualizing datasets should not be considered limited to the specific arrangements of modules shown in FIG. 1.1.
[0015] As shown in FIG. 1.1, the field (100) includes the subterranean formation (104), data acquisition tools (102-1), (102-2), (102-3), and (102-4), wellsite system A (114-1), wellsite system B (114-2), wellsite system C (114-3), a surface unit (112), and an exploration and production (E&P) computer system (118). The subterranean formation (104) includes several geological structures, such as a sandstone layer (106-1), a limestone layer (106-2), a shale layer (106-3), a sand layer (106-4), and a fault line (107). In one or more embodiments, data acquisition tools (102-1), (102-2), (102-3), and (102-4) are positioned at various locations along the field (100) for collecting data of the subterranean formation (104), referred to as survey operations. In particular, the data acquisition tools are adapted to measure the subterranean formation (104) and detect the characteristics of the geological structures of the subterranean formation (104). For example, data plots (108-1), (108-2), (108-3), and (108-4) are depicted along the field (100) to demonstrate the data generated by the data acquisition tools. Specifically, the static data plot (108-1) is a seismic two-way response time. Static data plot (108-2) is core sample data measured from a core sample of the subterranean formation (104). Static data plot (108-3) is a logging trace, referred to as a well log. Production decline curve or graph (108-4) is a dynamic data plot of the fluid flow rate over time. Other data may also be collected, such as historical data, analyst user inputs, economic information, and/or other measurement data and other parameters of interest.
[0016] Further, as shown in FIG. 1.1, each of the wellsite system A (114-1), wellsite system B (114-2), and wellsite system C (114-3) is associated with a rig, a wellbore, and other wellsite equipment configured to perform wellbore operations, such as logging, drilling, fracturing, production, or other applicable operations. For example, the wellsite system A (114-1) is associated with a rig (101), a wellbore (103), and drilling equipment to perform a drilling operation. Similarly, the wellsite system B (114-2) and wellsite system C (114-3) are associated with respective rigs, wellbores, and other wellsite equipment, such as production equipment and logging equipment, to perform production operations and logging operations, respectively. Generally, survey operations and wellbore operations are referred to as field operations of the field (100). In addition, data acquisition tools and wellsite equipment are referred to as field operation equipment. The field operations are performed as directed by a surface unit (112). For example, the field operation equipment may be controlled by a field operation control signal that is sent from the surface unit (112).
[0017] In one or more embodiments, the surface unit (112) is operatively coupled to the data acquisition tools (102-1), (102-2), (102-3), (102-4), and/or the wellsite systems. In particular, the surface unit (112) is configured to send commands to the data acquisition tools (102-1), (102-2), (102-3), (102-4), and/or the wellsite systems and to receive data therefrom. In one or more embodiments, the surface unit (112) may be located at the wellsite system A (114-1), wellsite system B (114-2), wellsite system C (114-3), and/or remote locations. The surface unit (112) may be provided with computer facilities (e.g., an E&P computer system (118)) for receiving, storing, processing, and/or analyzing data from the data acquisition tools (102-1), (102-2), (102-3), (102-4), the wellsite system A (114-1), wellsite system B (114-2), wellsite system C (114-3), and/or other parts of the field (100). The surface unit (112) may also be provided with or have functionality for actuating mechanisms at the field (100). The surface unit (112) may then send command signals to the field (100) in response to data received, stored, processed, and/or analyzed, for example to control and/or optimize various field operations described above. In one or more embodiments, the surface unit (112) is communicatively coupled to the E&P computer system (118). In one or more embodiments, the data received by the surface unit (112) may be sent to the E&P computer system (118) for further analysis. Generally, the E&P computer system (118) is configured to analyze, model, control, optimize, or perform management tasks of the aforementioned field operations based on the data provided from the surface unit (112).
[0018] In one or more embodiments, the E&P computer system (118) is provided with functionality for manipulating and analyzing the data, such as analyzing well logs to determine electrofacies in the subterranean formation (104) or performing simulation, planning, and optimization of production operations of the wellsite system A (114-1), wellsite system B (114-2), and/or wellsite system C (114-3). In one or more embodiments, the result generated by the E&P computer system (118) may be displayed for an analyst user to view the result in a two dimensional (2D) display, three dimensional (3D) display, or other suitable displays. Although the surface unit (112) is shown as separate from the E&P computer system (118) in FIG. 1.1, in other examples, the surface unit (112) and the E&P computer system (118) may also be combined.
[0019] Although FIG. 1.1 shows a field (100) on the land, the field (100) may be an offshore field. In such a scenario, the subterranean formation may be in the sea floor. Further, field data may be gathered from the field (100) that is an offshore field using a variety of offshore techniques for gathering field data.
[0020] FIG. 1.2 shows more details of the E&P computer system (118) in which one or more embodiments of visualizing datasets may be implemented. In one or more embodiments, one or more of the modules and elements shown in FIG. 1.2 may be omitted, repeated, and/or substituted. Accordingly, embodiments of visualizing datasets should not be considered limited to the specific arrangements of modules shown in FIG. 1.2.
[0021] As shown in FIG. 1.2, the E&P computer system (118) includes an E&P tool (230), a data repository (238) for storing intermediate data and resultant outputs of the E&P tool (230), a display device (239) for displaying outputs of the E&P tool (230), and a field task engine (240) for performing various tasks of the field operation. In one or more embodiments, the display device (239) may be a two-dimensional (2D) display device or a three-dimensional (3D) display device based on liquid crystal display, cathode ray tube, plasma display, or other display technology. In one or more embodiments, the data repository (238) may include one or more disk drive storage devices, one or more semiconductor storage devices, other suitable computer data storage devices, or combinations thereof. In one or more embodiments, content stored in the data repository (238) may be stored as a data file, a linked list, a data sequence, a database, a graphical representation, any other suitable data structure, or combinations thereof.
[0022] In one or more embodiments, the intermediate data and resultant outputs of the E&P tool (230) include the volume dataset (232), volume slice A (233), volume slice B (234), cursor position (235), panel size (236), and cross-cutting angle (237). The volume dataset (232) includes a collection of data items of the field (100), where each data item is assigned to a point/position in a three dimensional (3D) volume and corresponds to a location in the field (100). The data item is a measured, interpolated, extrapolated, or otherwise calculated value of a property (e.g., physical property, chemical property, etc.) at the position in the field. For example, each data item may include a seismic data item where the 3D volume corresponds to a region of the field (100). In one or more embodiments, the volume dataset (232) includes results generated by a data acquisition tool, such as the data acquisition tool (102-3) depicted in FIG. 1.1 above.
[0023] In one or more embodiments, a volume slice (e.g., volume slice A (233), volume slice B (234)) is a portion of the volume dataset (232) defined by a cross-sectional plane of the 3D volume. In other words, a volume slice is a cross section of the volume. The cross-sectional plane may be a portion of a linear or curvilinear surface.
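A minimal, non-limiting sketch of a volume dataset and a planar volume slice, using NumPy with illustrative array contents (the data values and axis convention are assumptions, not part of the disclosure):

```python
import numpy as np

# A volume dataset: each data item is assigned to a position (i, j, k)
# in a 3D volume and holds a property value at that position.
volume = np.arange(4 * 4 * 4).reshape(4, 4, 4)   # illustrative values

# A volume slice is the portion of the dataset lying on a cross-sectional
# plane of the 3D volume; here, the planar cross-section k = 2.
slice_a = volume[:, :, 2]

print(slice_a.shape)
```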
[0024] The cursor position (235) is a point/position within the 3D volume that is specified by a user of the E&P computer system (118). For example, the cursor position may be specified by a pointing device (not shown), such as a mouse, finger on a touchpad, stylus, etc. By way of another example, the cursor position may be a position of a displayed cursor as entered by a user when specifying coordinates. The point/position pointed to by the cursor position (235) corresponds to a point of interest in viewing the volume dataset (232). In addition, the panel size (236) and cross-cutting angle (237) are parameters specified by the user to control how the volume slice (e.g., volume slice A (233), volume slice B (234)) is displayed. An example of the volume dataset (232), volume slice A (233), and volume slice B (234) is described in reference to FIGS. 3.1-3.5 below.
[0025] In one or more embodiments, the E&P tool (230) includes the input receiver (221), the volume slice analyzer (222), and the rendering engine (223). Each of these components of the E&P tool (230) is described below.
[0026] In one or more embodiments, the input receiver (221) is configured to obtain the volume dataset (232) and user inputs. For example, the input receiver (221) may include hardware, software, and/or graphical user interface widgets that are configured to receive input. In one or more embodiments, the input receiver (221) obtains the volume dataset (232) from the surface unit (112) depicted in FIG. 1.1 above. In one or more embodiments, the input receiver (221) is further configured to obtain one or more of the cursor position (235), panel size (236), and cross-cutting angle (237) as a user input of the E&P computer system (118). For example, the input receiver (221) may obtain the volume dataset (232) and/or user inputs intermittently, periodically, in response to a user activation, or as triggered by an event. Accordingly, the intermediate and final results of the volume slice analyzer (222) and the rendering engine (223) may be generated intermittently, periodically, in response to a user activation, or as triggered by an event.
[0027] In one or more embodiments, the volume slice analyzer (222) is configured to generate, based on the cursor position (235), the volume slice A (233) of the volume dataset (232). In one or more embodiments, the volume slice A (233) corresponds to a cross-section of the volume of the volume dataset (232). For example, the cross-section may be defined by a plane passing through the cursor position (235) and oriented perpendicularly to a viewing direction specified by a user. In one or more embodiments, the volume slice A (233) includes the cross-section and data items assigned to points on the cross-section. In one or more embodiments, the volume slice A (233) includes a portion of the cross-section and data items assigned to points on the portion of the cross-section.
[0028] In one or more embodiments, the volume slice analyzer (222) is configured to extract, from the volume dataset (232) and in response to obtaining the cursor position (235), the volume slice B (234) of the volume dataset (232). The volume slice B (234) corresponds to another cross-section, or a portion of that cross-section, of the volume of the volume dataset (232). In particular, the volume slice B (234) intersects the volume slice A (233) at the cursor position (235). In one or more embodiments, the dimensions of the volume slice B (234) are defined based on the panel size (236). For example, the panel size (236) may include a height and width of the volume slice B (234). In one or more embodiments, the volume slice B (234) having the panel size (236) intersects the volume slice A (233) with an intersecting angle that is defined by the cross-cutting angle (237). In this context, the volume slice B (234) is referred to as a cross-cutting panel to the volume slice A (233).
[0029] An example of generating the cross-cutting panel based on the panel size (236) and cross-cutting angle (237) is described in reference to FIGS. 3.6-3.14 below.
[0030] In one or more embodiments, the rendering engine (223) is configured to generate a 2D or 3D display image based on the output of the volume slice analyzer (222). The 2D or 3D display image is provided to the display device (239) and displayed to a user. In one or more embodiments, the display image is a composite image of the volume slice A (233) and volume slice B (234). For example, the composite image may show the cross-cutting panel blocking a portion of the volume slice A (233) with volume slice B (234). As a result, the data items of the blocked portion of the volume slice A (233) are not displayed. In particular, the cross-cutting panel is displayed at a location on the composite image a distance away from where the cursor position (235) points to. In this manner, the data items on the volume slice A (233) around the cursor position (235) are not obscured. In other words, the data items on the volume slice A (233) within the distance from the cursor position (235) are displayed without being blocked by the cross-cutting panel. Accordingly, the user may view the data items around the cursor position (235) for both the volume slice A (233) and volume slice B (234) simultaneously. The distance separating the cursor position (235) and the cross-cutting panel may be pre-configured, specified by the user, or automatically adjusted based on the panel size and/or cross-cutting angle.
[0031] In one or more embodiments, the E&P computer system (118) includes the field task engine (240) that is configured to generate a field operation control signal based at least on a result generated by the E&P tool (230), such as based on a user input in response to displaying the 2D or 3D image described above. As noted above, the field operation equipment depicted in FIG. 1.1 above may be controlled by the field operation control signal. For example, the field operation control signal may be used to control drilling equipment, an actuator, a fluid valve, or other electrical and/or mechanical devices disposed about the field (100) depicted in FIG. 1.1 above. In particular, the field planning operation, drilling operation, production operation, etc., may be performed based on the 2D or 3D image described above.
[0032] The E&P computer system (118) may include one or more system computers, such as shown in FIGS. 4.1 and 4.2 below, which may be implemented as a server or any conventional computing system. However, those skilled in the art, having benefit of this disclosure, will appreciate that implementations of various technologies described herein may be practiced in other computer system configurations, including hypertext transfer protocol (HTTP) servers, hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network personal computers, minicomputers, mainframe computers, and the like.
[0033] While specific components are depicted and/or described for use in the units and/or modules of the E&P computer system (118) and the E&P tool (230), a variety of components with various functions may be used to provide the formatting, processing, utility and coordination functions for the E&P computer system (118) and the E&P tool (230). The components may have combined functionalities and may be implemented as software, hardware, firmware, or combinations thereof.
[0034] FIG. 2 depicts an example method in accordance with one or more embodiments. For example, the method depicted in FIG. 2 may be practiced using the E&P computer system (118) described in reference to FIGS. 1.1 and 1.2 above. In one or more embodiments, one or more of the elements shown in FIG. 2 may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of visualizing datasets should not be considered limited to the specific arrangements of elements shown in FIG. 2.
[0035] In Block 201, a volume slice of a volume dataset is displayed. In one or more embodiments, the volume slice is extracted from the volume dataset based on a pre-determined or user-specified viewing direction. In one or more embodiments of the invention, measurement values for the volume dataset are obtained using data acquisition tools and converted into a complete volume dataset.
[0036] In Block 202, a cursor position in the volume slice is obtained. In one or more embodiments, the cursor position corresponds to a point of interest of the user when viewing the volume dataset. In one or more embodiments, a cross-cutting angle and a panel size are also obtained.
[0037] In Block 203, a cross-cutting panel is extracted at the point of interest based on the cross-cutting angle and the panel size. For example, a cross-cutting volume slice may be identified first based on the volume slice and the cross-cutting angle. The cross-cutting panel may then be extracted from the cross-cutting volume slice based on the cursor position and the panel size.
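The extraction in Block 203 can be sketched in code. This is an illustrative sketch only, not the patented implementation: the function name, the nearest-neighbor sampling, the convention that the volume slice is an i-constant plane, and the treatment of the cross-cutting angle as the dihedral angle between panel and slice (90° meaning perpendicular) are all assumptions made for the example.

```python
import numpy as np

def extract_cross_cutting_panel(volume, slice_i, cursor_jk, angle_deg, panel_size):
    """Sample a cross-cutting panel that cuts into an i-constant volume
    slice at cursor position `cursor_jk`.  `angle_deg` is the assumed
    dihedral angle between panel and slice (90 means perpendicular), and
    `panel_size` = (half_width, half_depth) in voxels.  Nearest-neighbor
    sampling; out-of-range voxels are left at zero."""
    half_w, half_d = panel_size
    j0, k0 = cursor_jk
    a = np.radians(angle_deg)
    di, dk = np.sin(a), np.cos(a)   # tilt out of the slice vs. along k
    ni, nj, nk = volume.shape
    panel = np.zeros((2 * half_d + 1, 2 * half_w + 1), dtype=volume.dtype)
    for s in range(-half_d, half_d + 1):       # step out of the slice plane
        for t in range(-half_w, half_w + 1):   # step along the slice
            i = int(np.rint(slice_i + s * di))
            j = j0 + t
            k = int(np.rint(k0 + s * dk))
            if 0 <= i < ni and 0 <= j < nj and 0 <= k < nk:
                panel[s + half_d, t + half_w] = volume[i, j, k]
    return panel
```

At a 90° cross-cutting angle the sampled rows run purely along the i axis, perpendicular to the slice; at smaller angles the panel tilts toward the slice plane.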
[0038] In Block 204, the cross-cutting panel is displayed blocking a portion of the volume slice on the display. In one or more embodiments, the cross-cutting panel is displayed away from the point of interest to avoid obscuring the displayed volume slice around the point of interest. In one or more embodiments, the cross-cutting angle, the panel size, and the display location of the cross-cutting panel are adjusted by the user to achieve a desired view of the volume dataset in the vicinity of the point of interest. In one or more embodiments, the distance separating the cursor position and the cross-cutting panel may be pre-configured, specified by the user, or automatically adjusted based on the panel size and/or cross-cutting angle. In one or more embodiments, the cross-cutting panel and the volume slice are displayed as a composite image on a 2D or 3D display.
[0039] In Block 205, a determination is made as to whether a modified cursor position is received subsequent to obtaining the current cursor position. If the determination is positive, i.e., the cursor position is modified, the method returns to Block 202 and the displayed cross-cutting panel is adjusted accordingly. If the determination is negative, i.e., the cursor position is not modified, the method proceeds to Block 206.
[0040] In Block 206, in response to displaying the cross-cutting panel blocking a portion of the volume slice away from the point of interest, an input is received from a user. In one or more embodiments, a field operation control signal is generated based on the input. In one or more embodiments, the field operation control signal is generated based on the user input in response to displaying the 2D or 3D image described above.
[0041] In Block 207, a field operation is performed based on the control signal. In one or more embodiments, field operation equipment may be controlled by the field operation control signal. For example, the field operation control signal may be used to control drilling equipment, an actuator, a fluid valve, or other electrical and/or mechanical devices disposed about the field. In particular, the field planning operation, drilling operation, production operation, etc., may be performed based on the 2D or 3D image described above.
[0042] FIGS. 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, and
3.14 show an example in accordance with one or more embodiments. In one or more embodiments, the example shown in these figures may be practiced using the E&P computer system shown in FIGS. 1.1 and 1.2 and the method described in reference to FIG. 2 above. The following example is for explanatory purposes and is not intended to limit the scope of the claims.
[0043] In general, FIGS. 3.1-3.14 describe an example for visualizing data. In one embodiment, the data is a volumetric dataset. In certain industries, such as the oil and gas industry, increasingly sophisticated data acquisition techniques are used to produce datasets such as volume datasets. The datasets are visualized and interpreted by software. In many situations, a volume dataset may be visualized using intersection planes. An intersection plane may be used to visualize the data through which the plane runs. The intersection plane with the associated data may be referred to as the volume slice. For example, the volume dataset may include seismic data that indirectly describe subterranean geological structure. Seismic data may also include measurement noise and may be interpreted by geoscientists to derive the geological structure information. During interpretation of the seismic data (i.e., seismic interpretation), the geoscientist may manually pick lines on a volume slice using a pointing device, such as a computer mouse. Multiple volume slices are picked, and the picked lines collectively define sub-surfaces. [0044] One or more embodiments create simultaneous visualizations of data, such as seismic data, in two different directions at a location. By creating simultaneous visualizations, one or more embodiments may facilitate improved accuracy and efficiency of volume visualization, such as seismic interpretation.
[0045] In one embodiment, the volumetric data includes a 3D dataset of elements called "voxels." One representation of such a dataset is shown in FIG. 3.1. The voxels may be uniformly distributed throughout the volume, such as the volume (310) shown in FIG. 3.1. However, some embodiments may not have a uniform distribution. A volume may include at least three axes (i, j, k) that are orthogonal to each other and that may define a volume coordinate system. Each of the individual voxels has a distinct position in the volume. The position may be associated with a coordinate location (i, j, k). In many embodiments, each of i, j, k is a non-negative integer. The individual voxels may also have various associated attributes such as, for example, color, illumination, opacity, velocity, amplitude, etc. The attributes of the voxels may vary in different application areas (e.g., CAT scans in medicine, confocal microscopy, and seismic data and its derivatives in geoscience).
[0046] The volume (310) shown in FIG. 3.1 has an i coordinate ranging from 0 to 4, a j coordinate ranging from 0 to 6, and a k coordinate ranging from 0 to 4. In such an embodiment, there are 5x7x5=175 voxels in the volume. Typical volumes may include many more voxels, such as thousands or millions.
[0047] One approach to visualizing a volume is to display orthogonal slices. In FIG. 3.2, an orthogonal slice of the volume with i=2 is shown such that voxels on the slice have the same i value. In FIG. 3.3, a slice with j=2 is shown. In FIG. 3.4, a slice with k=3 is shown. [0048] FIG. 3.5 illustrates a screenshot A (351) of a seismic volume and screenshot B (352), screenshot C (353), and screenshot D (354) of the seismic volume's slices along three orthogonal directions. More generally, a volume slice may be defined as a linear surface in any direction, not necessarily perpendicular to any of the volume's three orthogonal axes. While the examples are given in connection with an orthogonal slice, the approach can be used on any volume slice. In some embodiments, the volume may have a curvilinear coordinate system and the volume slice may be defined as a curved surface.
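For illustration, the voxel volume of FIG. 3.1 and the orthogonal slices of FIGS. 3.2-3.4 map naturally onto a 3D array; this sketch assumes NumPy and the (i, j, k) axis ordering described above.

```python
import numpy as np

# A 5x7x5 voxel volume as in FIG. 3.1: 5*7*5 = 175 voxels, each addressed
# by a coordinate (i, j, k) in the volume coordinate system.
volume = np.random.rand(5, 7, 5)

# Orthogonal slices as in FIGS. 3.2-3.4: fix one coordinate, vary the others.
slice_i = volume[2, :, :]   # all voxels with i == 2, shape (7, 5)
slice_j = volume[:, 2, :]   # all voxels with j == 2, shape (5, 5)
slice_k = volume[:, :, 3]   # all voxels with k == 3, shape (5, 7)
```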
[0049] Generally, when visualizing a volume slice, only the data on the slice is visible. When doing more detailed analysis, it is often desirable to display more surrounding data at a given location on the slice. In one embodiment, the disclosed approach is used to show surrounding data. The approach may provide a way to simultaneously inspect volume data in multiple different directions at a location. FIG. 3.6 illustrates a volume slice i=2 with a point indicated in the middle of the slice (identified by the arrow). In certain embodiments, the arrow is displayed as a cursor to identify the point of interest. FIG. 3.7 illustrates a small rectangular panel used to perpendicularly cut into the volume slice at the point. The rectangular panel may be referred to as a cross-cutting panel (371). The size of the cross-cutting panel (371) may have a default value. The size may be customizable. FIG. 3.8 illustrates the cross-cutting display (381) that displays the data extracted from the cross-cutting panel (371). In certain embodiments, as a user moves the cursor, the cross-cutting display tracks the cursor to its new location by repeating the process at the new cursor location. FIG. 3.9 illustrates that, in certain embodiments, the angle (i.e., cross-cutting angle (391)) at which the cross-cutting panel intersects with the volume slice may be varied.
[0050] FIG. 3.10 illustrates a scenario in which the cross-cutting panel (311) intersects with the volume slice (312) at a 90° angle, with FIG. 3.11 illustrating an example on a seismic volume (313) where grey scale shading represents data extracted from the volume slice (312) except within the cross-cutting display (314). The grey scale shading within the cross-cutting display (314) represents data extracted from the cross-cutting panel (311). The cross-cutting angle may, in certain embodiments, be continuously varied at a given cursor location to inspect its surroundings at all angles.
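The composite of FIG. 3.11, in which the cross-cutting display blocks part of the volume slice image, amounts to overlaying one 2D array onto another. A minimal sketch follows; the function name and the omission of border clipping are simplifications for the example, not part of the described embodiments.

```python
import numpy as np

def composite(slice_img, panel_img, top_left):
    """Overlay the cross-cutting display onto the volume slice image,
    blocking the covered region of the slice.  `top_left` is the (row,
    column) of the panel's upper-left corner; clipping at the image
    borders is omitted for brevity."""
    out = slice_img.copy()          # leave the original slice image intact
    r, c = top_left
    h, w = panel_img.shape
    out[r:r + h, c:c + w] = panel_img
    return out
```

In an interactive viewer, this compositing step would be repeated each time the cursor moves or the cross-cutting angle changes.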
[0051] FIG. 3.12 illustrates a scenario in which the cross-cutting panel (321) intersects with the volume slice (322) at a 45° angle, with FIG. 3.13 illustrating an example on a seismic volume (323). The 45° cross-cutting angle may, in certain embodiments, be displayed to the user using an angle indicator (324), which is a short line segment oriented at 45° with respect to the cross-cutting display (325). As the cross-cutting angle is continuously varied by the user, the angle indicator (324) rotates accordingly as the user inspects data surrounding the cursor location at all angles.
[0052] Cross-cutting cursor tracking may also be used to display different types of volume attributes. In certain embodiments, real time computed attribute data may be displayed on the cross-cutting panel. FIG. 3.14 illustrates a seismic volume slice (341) with cross-cutting display (342) of its instantaneous frequency attribute.
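One common way to compute the instantaneous frequency attribute shown in FIG. 3.14 is through the analytic signal: take the Hilbert transform of each trace, unwrap the instantaneous phase, and differentiate. The FFT-based construction below is a sketch of that standard technique, not necessarily the computation used in the embodiments.

```python
import numpy as np

def analytic_signal(trace):
    """Analytic signal of a real 1-D trace via an FFT-domain filter
    (the standard Hilbert-transform construction)."""
    n = len(trace)
    spectrum = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0        # double the positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0            # keep the Nyquist bin as-is
    return np.fft.ifft(spectrum * h)

def instantaneous_frequency(trace, fs):
    """Instantaneous frequency in Hz: derivative of the unwrapped
    instantaneous phase, scaled by the sampling rate fs."""
    phase = np.unwrap(np.angle(analytic_signal(trace)))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

Applied trace by trace over the cross-cutting panel, this yields the attribute data displayed in the cross-cutting display.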
[0053] In certain embodiments, a cross-cutting panel user interface may provide a user with options to configure the cross-cutting panel. In one embodiment, the user interface may allow the user to vary the location of the cursor position that identifies a point-of-interest in the volume. The user interface may also allow the user to change the cross-cutting angle or otherwise rotate the cross-cutting plane associated with the cross-cutting panel.
[0054] Embodiments may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used. For example, as shown in FIG. 4.1, the computing system (400) may include one or more computer processors (402), non-persistent storage (404) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage (406) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface (412) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), and numerous other elements and functionalities.
[0055] The computer processor(s) (402) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing system (400) may also include one or more input devices (410), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
[0056] The communication interface (412) may include an integrated circuit for connecting the computing system (400) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
[0057] Further, the computing system (400) may include one or more output devices (408), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (402), non-persistent storage (404), and persistent storage (406). Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms. [0058] Software instructions in the form of computer readable program code to perform embodiments of the invention may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium. Specifically, the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the invention.
[0059] The computing system (400) in FIG. 4.1 may be connected to or be a part of a network. For example, as shown in FIG. 4.2, the network (420) may include multiple nodes (e.g., node X (422), node Y (424)). Each node may correspond to a computing system, such as the computing system shown in FIG. 4.1, or a group of nodes combined may correspond to the computing system shown in FIG. 4.1. By way of an example, embodiments of the invention may be implemented on a node of a distributed system that is connected to other nodes. By way of another example, embodiments of the invention may be implemented on a distributed computing system having multiple nodes, where each portion of the invention may be located on a different node within the distributed computing system. Further, one or more elements of the aforementioned computing system (400) may be located at a remote location and connected to the other elements over a network.
[0060] Although not shown in FIG. 4.2, the node may correspond to a blade in a server chassis that is connected to other nodes via a backplane. By way of another example, the node may correspond to a server in a data center. By way of another example, the node may correspond to a computer processor or micro-core of a computer processor with shared memory and/or resources. [0061] The nodes (e.g., node X (422), node Y (424)) in the network (420) may be configured to provide services for a client device (426). For example, the nodes may be part of a cloud computing system. The nodes may include functionality to receive requests from the client device (426) and transmit responses to the client device (426). The client device (426) may be a computing system, such as the computing system shown in FIG. 4.1. Further, the client device (426) may include and/or perform all or a portion of one or more embodiments of the invention.
[0062] The computing system or group of computing systems described in FIGS.
4.1 and 4.2 may include functionality to perform a variety of operations disclosed herein. For example, the computing system(s) may perform communication between processes on the same or different system. A variety of mechanisms, employing some form of active or passive communication, may facilitate the exchange of data between processes on the same device. Examples representative of these inter-process communications include, but are not limited to, the implementation of a file, a signal, a socket, a message queue, a pipeline, a semaphore, shared memory, message passing, and a memory-mapped file. Further details pertaining to a couple of these non-limiting examples are provided below.
[0063] Based on the client-server networking model, sockets may serve as interfaces or communication channel end-points enabling bidirectional data transfer between processes on the same device. Following this model, a server process (e.g., a process that provides data) may create a first socket object. Next, the server process binds the first socket object, thereby associating the first socket object with a unique name and/or address. After creating and binding the first socket object, the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data). At this point, when a client process wishes to obtain data from a server process, the client process starts by creating a second socket object. The client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object. The client process then transmits the connection request to the server process. Depending on availability, the server process may accept the connection request, establishing a communication channel with the client process, or the server process, busy handling other operations, may queue the connection request in a buffer until the server process is ready. An established connection informs the client process that communications may commence. In response, the client process may generate a data request specifying the data that the client process wishes to obtain. The data request is subsequently transmitted to the server process. Upon receiving the data request, the server process analyzes the request and gathers the requested data. Finally, the server process generates a reply including at least the requested data and transmits the reply to the client process. The data may be transferred as datagrams or, more commonly, as a stream of characters (e.g., bytes).
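The request/reply exchange described above can be sketched with a local TCP socket. The helper name and the single-request server thread are illustrative simplifications for the example, not part of the described embodiments.

```python
import socket
import threading

def socket_round_trip(payload):
    """One client-server request/reply cycle over a local TCP socket:
    create, bind, listen, connect, request, reply."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # bind the first socket object
    srv.listen(1)                        # wait for connection requests
    addr = srv.getsockname()

    def serve():
        conn, _ = srv.accept()           # accept, establishing the channel
        conn.sendall(b"reply:" + conn.recv(1024))   # gather data and reply
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # second socket
    cli.connect(addr)                    # transmit the connection request
    cli.sendall(payload)                 # transmit the data request
    reply = cli.recv(1024)               # receive the requested data
    cli.close()
    t.join()
    srv.close()
    return reply
```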
[0064] Shared memory refers to the allocation of virtual memory space in order to substantiate a mechanism for which data may be communicated and/or accessed by multiple processes. In implementing shared memory, an initializing process first creates a shareable segment in persistent or non-persistent storage. Post creation, the initializing process then mounts the shareable segment, subsequently mapping the shareable segment into the address space associated with the initializing process. Following the mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes that may also write and read data to and from the shareable segment. Changes made to the data in the shareable segment by one process may immediately affect other processes, which are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment maps to the address space of that authorized process. Often, only one authorized process, other than the initializing process, may mount the shareable segment at any given time.
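In Python, the shareable-segment pattern described above maps onto the standard-library `multiprocessing.shared_memory` module. The sketch below attaches twice within a single process for brevity; in practice the second attachment would be made by a separate authorized process that knows the segment name.

```python
from multiprocessing import shared_memory

# The initializing process creates and mounts a shareable segment.
seg = shared_memory.SharedMemory(create=True, size=16)
seg.buf[:5] = b"hello"

# An authorized process attaches to the same segment by name.
other = shared_memory.SharedMemory(name=seg.name)

# Changes made through one mapping are immediately visible via the other.
other.buf[0] = ord(b"j")
print(bytes(seg.buf[:5]))   # b'jello'

other.close()
seg.close()
seg.unlink()                # the initializing process removes the segment
```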
[0065] Other techniques may be used to share data, such as the various data described in the present application, between processes without departing from the scope of the invention. The processes may be part of the same or different application and may execute on the same or different computing system.
[0066] Rather than or in addition to sharing data between processes, the computing system performing one or more embodiments of the invention may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a graphical user interface (GUI) on the user device. Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device. In response to selecting a particular item, information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor. Upon selection of the item by the user, the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user's selection.
[0067] By way of another example, a request to obtain data regarding the particular item may be sent to a server operatively connected to the user device through a network. For example, the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL. In response to the request, the server may extract the data regarding the particular selected item and send the data to the device that initiated the request. Once the user device has received the data regarding the particular item, the contents of the received data regarding the particular item may be displayed on the user device in response to the user's selection. Further to the above example, the data received from the server after selecting the URL link may provide a web page in Hyper Text Markup Language (HTML) that may be rendered by the web client and displayed on the user device.
[0068] Once data is obtained, such as by using techniques described above or from storage, the computing system, in performing one or more embodiments of the invention, may extract one or more data items from the obtained data. For example, the extraction may be performed as follows by the computing system in FIG. 4.1. First, the organizing pattern (e.g., grammar, schema, layout) of the data is determined, which may be based on one or more of the following: position (e.g., bit or column position, Nth token in a data stream, etc.), attribute (where the attribute is associated with one or more values), or a hierarchical/tree structure (having layers of nodes at different levels of detail, such as in nested packet headers or nested document sections). Then, the raw, unprocessed stream of data symbols is parsed, in the context of the organizing pattern, into a stream (or layered structure) of tokens (where each token may have an associated token "type").
[0069] Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). For position-based data, the token(s) at the position(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. The extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).
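As a concrete instance of the hierarchical case: JSON has a well-defined organizing pattern (a grammar), the parser turns the raw character stream into a layered structure, and extraction criteria select nodes by attribute. The field names below are invented for the illustration.

```python
import json

# Raw, unprocessed stream of symbols with a known organizing pattern (JSON).
raw = '{"header": {"type": "seismic"}, "traces": [{"amp": 3}, {"amp": 7}]}'

# Parse, in the context of the pattern, into a layered structure of nodes.
doc = json.loads(raw)

# Extraction criteria: the "amp" attribute of every node under "traces".
amps = [trace["amp"] for trace in doc["traces"]]
print(amps)   # [3, 7]
```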
[0070] The extracted data may be used for further processing by the computing system. For example, the computing system of FIG. 4.1, while performing one or more embodiments of the invention, may perform data comparison. Data comparison may be used to compare two or more data values (e.g., A, B). For example, one or more embodiments may determine whether A > B, A = B, A != B, A < B, etc. The comparison may be performed by submitting A, B, and an opcode specifying an operation related to the comparison into an arithmetic logic unit (ALU) (i.e., circuitry that performs arithmetic and/or bitwise logical operations on the two data values). The ALU outputs the numerical result of the operation and/or one or more status flags related to the numerical result. For example, the status flags may indicate whether the numerical result is a positive number, a negative number, zero, etc. By selecting the proper opcode and then reading the numerical results and/or status flags, the comparison may be executed. For example, in order to determine if A > B, B may be subtracted from A (i.e., A - B), and the status flags may be read to determine if the result is positive (i.e., if A > B, then A - B > 0). In one or more embodiments, B may be considered a threshold, and A is deemed to satisfy the threshold if A = B or if A > B, as determined using the ALU. In one or more embodiments of the invention, A and B may be vectors, and comparing A with B involves comparing the first element of vector A with the first element of vector B, the second element of vector A with the second element of vector B, etc. In one or more embodiments, if A and B are strings, the binary values of the strings may be compared.
[0071] The computing system in FIG. 4.1 may implement and/or be connected to a data repository. For example, one type of data repository is a database. A database is a collection of information configured for ease of data retrieval, modification, re-organization, and deletion. A Database Management System (DBMS) is a software application that provides an interface for users to define, create, query, update, or administer databases.
[0072] The user, or software application, may submit a statement or query into the
DBMS. Then the DBMS interprets the statement. The statement may be a select statement to request information, an update statement, a create statement, a delete statement, etc. Moreover, the statement may include parameters that specify data or a data container (database, table, record, column, view, etc.), identifier(s), conditions (comparison operators), functions (e.g., join, full join, count, average, etc.), sorts (e.g., ascending, descending), or others. The DBMS may execute the statement. For example, the DBMS may access a memory buffer, or a reference or index of a file, for read, write, deletion, or any combination thereof, in responding to the statement. The DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query. The DBMS may return the result(s) to the user or software application.
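The statement flow above can be sketched with the standard-library `sqlite3` DBMS; the table and column names are invented for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# A create statement defines a data container (a table).
db.execute("CREATE TABLE picks (slice_no INTEGER, depth REAL)")

# Insert statements populate the container.
db.executemany("INSERT INTO picks VALUES (?, ?)",
               [(1, 1200.5), (1, 1310.0), (2, 1205.0)])

# A select statement with a condition (a comparison) and a descending sort;
# the DBMS interprets the statement, executes it, and returns the result.
rows = db.execute(
    "SELECT depth FROM picks WHERE slice_no = ? ORDER BY depth DESC", (1,)
).fetchall()
print(rows)   # [(1310.0,), (1200.5,)]
db.close()
```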
[0073] The computing system of FIG. 4.1 may include functionality to present raw and/or processed data, such as results of comparisons and other processing. For example, presenting data may be accomplished through various presenting methods. Specifically, data may be presented through a user interface provided by a computing device. The user interface may include a GUI that displays information on a display device, such as a computer monitor or a touchscreen on a handheld computer device. The GUI may include various GUI widgets that organize what data is shown as well as how data is presented to a user. Furthermore, the GUI may present data directly to the user, e.g., data presented as actual data values through text, or rendered by the computing device into a visual representation of the data, such as through visualizing a data model.
[0074] For example, a GUI may first obtain a notification from a software application requesting that a particular data object be presented within the GUI. Next, the GUI may determine a data object type associated with the particular data object, e.g., by obtaining data from a data attribute within the data object that identifies the data object type. Then, the GUI may determine any rules designated for displaying that data object type, e.g., rules specified by a software framework for a data object class or according to any local parameters defined by the GUI for presenting that data object type. Finally, the GUI may obtain data values from the particular data object and render a visual representation of the data values within a display device according to the designated rules for that data object type.
[0075] Data may also be presented through various audio methods. In particular, data may be rendered into an audio format and presented as sound through one or more speakers operably connected to a computing device.
[0076] Data may also be presented to a user through haptic methods. For example, haptic methods may include vibrations or other physical signals generated by the computing system. For example, data may be presented to a user using a vibration generated by a handheld computer device with a predefined duration and intensity of the vibration to communicate the data.
[0077] The above description of functions presents only a few examples of functions performed by the computing system of FIG. 4.1 and the nodes and/or client device in FIG. 4.2. Other functions may be performed using one or more embodiments of the invention.
[0078] While one or more embodiments have been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments may be devised which do not depart from the scope as disclosed herein. Accordingly, the scope should be limited only by the attached claims.

Claims

CLAIMS

What is claimed is:
1. A method for performing a field operation of a field, comprising:
obtaining a volume dataset comprising a plurality of data items of the field, wherein each of the plurality of data items is assigned to a position in a three dimensional (3D) volume and corresponds to a location in the field; obtaining a cursor position in a first volume slice of the volume dataset, wherein the first volume slice corresponds to at least a first portion of a first cross-section of the volume;
extracting, from the volume dataset and in response to obtaining the cursor position, a second volume slice of the volume dataset, wherein the second volume slice corresponds to a second portion of a second cross-section of the volume, wherein the second volume slice intersects the first volume slice at the cursor position;
displaying the first volume slice on a display; and
displaying the second volume slice on the display, the second volume slice blocking, in the display, a third portion of the first volume slice a distance away from the cursor position.
2. The method of claim 1, further comprising:
receiving, in response to displaying the first volume slice and the second volume slice, an input from a user;
generating a control signal based on the input; and
performing the field operation based on the control signal.
3. The method of claim 1, further comprising: receiving a cross-cutting angle associated with the cursor position, wherein the cross-cutting angle defines a geometrical relationship between the first cross-section and the second cross-section; and
identifying the second volume slice based on the first volume slice and the cross-cutting angle.
4. The method of claim 1, further comprising:
receiving a panel size associated with the cursor position, wherein the panel size defines a geometrical size of the second portion of the second cross-section; and
identifying the second volume slice based on the first volume slice and the panel size.
5. The method of claim 1, further comprising:
generating a composite image of the first volume slice and the second volume slice, wherein the third portion of the first volume slice is not obscured by the second volume slice in the composite image,
wherein displaying the first volume slice and the second volume slice comprises displaying the composite image.
6. The method of claim 5, further comprising:
receiving, subsequent to receiving the cursor position, a modified cursor position; generating a modified composite image based on the modified cursor position; and displaying, within a pre-determined time period subsequent to receiving the modified cursor position, the modified composite image.
7. The method of claim 1, wherein at least one selected from the first cross-section and the second cross-section is based on a linear surface within the volume.
8. The method of claim 1, wherein at least one selected from the first cross-section and the second cross-section is based on a curvilinear surface within the volume.
9. A system for performing a field operation of a field, comprising:
an exploration and production (E&P) computer system, comprising:
a computer processor;
memory storing instructions executed by the computer processor, wherein the instructions comprise functionality to:
obtain a volume dataset comprising a plurality of data items of the field, wherein each of the plurality of data items is assigned to a position in a three dimensional (3D) volume and corresponds to a location in the field;
obtain a cursor position in a first volume slice of the volume dataset, wherein the first volume slice corresponds to at least a first portion of a first cross-section of the volume; and
extract, from the volume dataset and in response to obtaining the cursor position, a second volume slice of the volume dataset, wherein the second volume slice corresponds to a second portion of a second cross-section of the volume, wherein the second volume slice intersects the first volume slice at the cursor position;
a display device configured to:
display the first volume slice; and
display the second volume slice blocking a third portion of the first volume slice a distance away from the cursor position; and
a repository for storing the volume dataset.
10. The system of claim 9, further comprising:
field equipment coupled to the E&P computer system and configured to perform the field operation based on a control signal,
wherein the instructions further comprise functionality to:
receive, in response to displaying the first volume slice and the second volume slice, an input from a user; and
generate the control signal based on the input.
11. The system of claim 9, wherein the instructions further comprise functionality to:
receive a cross-cutting angle associated with the cursor position, wherein the cross-cutting angle defines a geometrical relationship between the first cross-section and the second cross-section; and
identify the second volume slice based on the first volume slice and the cross-cutting angle.
12. The system of claim 9, wherein the instructions further comprise functionality to:
receive a panel size associated with the cursor position, wherein the panel size defines a geometrical size of the second portion of the second cross-section; and
identify the second volume slice based on the first volume slice and the panel size.
13. The system of claim 9, wherein the instructions further comprise functionality to:
generate a composite image of the first volume slice and the second volume slice, wherein the third portion of the first volume slice is not obscured by the second volume slice in the composite image,
wherein displaying the first volume slice and the second volume slice comprises displaying the composite image.
14. The system of claim 13, wherein the instructions further comprise functionality to:
receive, subsequent to receiving the cursor position, a modified cursor position;
generate a modified composite image based on the modified cursor position; and
display, within a pre-determined time period subsequent to receiving the modified cursor position, the modified composite image.
15. The system of claim 9, wherein at least one selected from the first cross-section and the second cross-section is based on a linear surface within the volume.
16. The system of claim 9, wherein at least one selected from the first cross-section and the second cross-section is based on a curvilinear surface within the volume.
17. A non-transitory computer readable medium comprising computer readable program code for:
obtaining a volume dataset comprising a plurality of data items of a field, wherein each of the plurality of data items is assigned to a position in a three dimensional (3D) volume and corresponds to a location in the field;
obtaining a cursor position in a first volume slice of the volume dataset, wherein the first volume slice corresponds to at least a first portion of a first cross-section of the volume;
extracting, from the volume dataset and in response to obtaining the cursor position, a second volume slice of the volume dataset, wherein the second volume slice corresponds to a second portion of a second cross-section of the volume, wherein the second volume slice intersects the first volume slice at the cursor position;
displaying the first volume slice on a display; and
displaying the second volume slice on the display, the second volume slice blocking, in the display, a third portion of the first volume slice a distance away from the cursor position.
18. The non-transitory computer readable medium of claim 17, further comprising computer readable program code for:
receiving, in response to displaying the first volume slice and the second volume slice, an input from a user;
generating a control signal based on the input; and
performing a field operation based on the control signal.
19. The non-transitory computer readable medium of claim 17, further comprising computer readable program code for:
receiving a cross-cutting angle associated with the cursor position, wherein the cross-cutting angle defines a geometrical relationship between the first cross-section and the second cross-section; and
identifying the second volume slice based on the first volume slice and the cross-cutting angle.
20. The non-transitory computer readable medium of claim 17, further comprising computer readable program code for:
receiving a panel size associated with the cursor position, wherein the panel size defines a geometrical size of the second portion of the second cross-section; and
identifying the second volume slice based on the first volume slice and the panel size.
PCT/US2016/018936 2015-02-23 2016-02-22 Visualizing datasets WO2016137888A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562119597P 2015-02-23 2015-02-23
US62/119,597 2015-02-23

Publications (1)

Publication Number Publication Date
WO2016137888A1 true WO2016137888A1 (en) 2016-09-01

Family

ID=56789861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/018936 WO2016137888A1 (en) 2015-02-23 2016-02-22 Visualizing datasets

Country Status (1)

Country Link
WO (1) WO2016137888A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6211674B1 (en) * 1999-05-14 2001-04-03 General Electric Company Method and system for providing a maximum intensity projection of a non-planar image
WO2001023911A1 (en) * 1999-09-30 2001-04-05 Shell Internationale Research Maatschappij B.V. Method and apparatus for multi-dimensional data modelling and analysis using a haptic interface device
US20080049553A1 (en) * 2006-08-28 2008-02-28 Anil Chopra Method and apparatus for seismic data interpretation using 3D overall view
US20110115787A1 (en) * 2008-04-11 2011-05-19 Terraspark Geosciences, Llc Visulation of geologic features using data representations thereof
US20110247829A1 (en) * 2008-10-24 2011-10-13 Dobin Mark W Tracking geologic object and detecting geologic anomalies in exploration seismic data volume

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16756121
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 16756121
    Country of ref document: EP
    Kind code of ref document: A1