Seismic signal processing can be summarized in three main steps:

- The first step is pre-processing of the data and the application of static corrections. The purpose of pre-processing is to extract reflected waves from individual shot records by filtering out the parasitic events created by direct and refracted arrivals, surface waves, converted waves, multiples, and noise. It also compensates for amplitude losses related to propagation. Records are then sorted into common-midpoint (CMP) gathers or common-offset gathers.
- The second step converts common-midpoint or common-offset gathers into time- or depth-migrated seismic amplitude traces. This step includes the determination of the velocity model, using stacking velocity analyses or tomography methods.
- The third step is the inversion of time- or depth-migrated seismic amplitude traces into geological, petrophysical, and geomechanical properties at the scale of reservoir simulators.
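The velocity analysis and stacking mentioned in the second step can be illustrated with a toy sketch: flattening the hyperbolic moveout of a reflection in a synthetic CMP gather with a normal-moveout (NMO) correction, then stacking the traces. All names, sizes, and parameter values below are illustrative assumptions, not a production workflow.

```python
import numpy as np

def nmo_correct(gather, offsets, t0_axis, v_stack, dt):
    """Flatten hyperbolic moveout t(x) = sqrt(t0^2 + (x/v)^2) sample by sample."""
    n_samples, _ = gather.shape
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t_src = np.sqrt(t0_axis**2 + (x / v_stack)**2)   # source time for each t0
        idx = np.round(t_src / dt).astype(int)
        valid = idx < n_samples
        corrected[valid, j] = gather[idx[valid], j]
    return corrected

# Synthetic gather: one reflector at t0 = 0.4 s, true velocity 2000 m/s.
dt, n_samples = 0.004, 251
t0_axis = np.arange(n_samples) * dt
offsets = np.arange(0.0, 1000.0, 100.0)
v_true = 2000.0
gather = np.zeros((n_samples, len(offsets)))
for j, x in enumerate(offsets):
    t = np.sqrt(0.4**2 + (x / v_true)**2)
    gather[min(int(round(t / dt)), n_samples - 1), j] = 1.0

flat = nmo_correct(gather, offsets, t0_axis, v_true, dt)
stack = flat.mean(axis=1)        # simple horizontal stack of the corrected gather
# The stacked peak should sit near the reflector time t0 = 0.4 s.
print(np.argmax(stack) * dt)
```

In real processing the stacking velocity is not known in advance; it is the value that best flattens the gather, which is what stacking velocity analysis searches for.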

The classical approach to seismic processing is called deterministic processing because it relies on the application of deterministic geophysical laws to seismic records from onshore or offshore acquisitions.

- Deterministic signal processing is an empirical answer to the problem of extracting valuable seismic “signal” from a set of recorded measurements in the time domain and converting it into reliable reservoir “properties” in the depth domain.
- It must deal with noise removal and with the limited resolution of seismic images, both of which affect their reliability for reservoir characterization.

Stochastic signal processing offers a consistent mathematical framework (a probability model) to optimize the parameterization of the geophysical laws involved in the processing while, at the same time, quantifying the reliability of the processing (uncertainty management).

- Stochastic signal processing provides an objective answer to the problem of the reliability of seismic images for reservoir characterization. Instead of providing a single processing output, it provides a probability distribution of possible outputs.
- Stochastic signal processing quantifies the reliability of a processing step through confidence intervals attached to the processing output, whether the uncertainty comes from the presence of noise or from the limited resolution of the seismic recording.
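The idea of replacing a single output by a distribution with confidence intervals can be sketched with a toy Monte Carlo experiment. The signal value, noise level, and fold below are illustrative assumptions, not field parameters, and this is not a real processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
true_amplitude = 1.0   # assumed reflected amplitude (illustrative)
noise_std = 0.5        # assumed noise standard deviation per trace (illustrative)
fold = 30              # number of traces stacked per CMP (illustrative)

# Each realization stacks `fold` noisy traces into one processing output.
n_realizations = 10_000
stacks = true_amplitude + rng.normal(0.0, noise_std,
                                     (n_realizations, fold)).mean(axis=1)

# Instead of one number, report the distribution of the output:
# a 95% confidence interval quantifies the reliability of the stack.
lo, hi = np.percentile(stacks, [2.5, 97.5])
print(f"mean = {stacks.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The interval narrows as the fold increases, which is the quantitative counterpart of the familiar observation that stacking improves signal-to-noise ratio.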

Some critical steps of deterministic processing have been translated into their stochastic counterparts, such as quality control, filtering, wave separation, stacking, time-to-depth conversion, and volumetrics computation; others are under way, such as tomography, migration, velocity modelling, and inversion.
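As a preview of the probability-model toolbox behind these stochastic counterparts, here is a minimal sketch of one of its building blocks, the empirical (semi-)variogram, computed on a synthetic 1-D profile. The data and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated synthetic profile: white noise smoothed by a 10-sample
# moving average, giving a correlation length of about 10 samples.
z = np.convolve(rng.normal(size=500), np.ones(10) / 10, mode="valid")

def empirical_variogram(z, max_lag):
    """gamma(h) = 0.5 * mean((z(x + h) - z(x))^2) on a regular 1-D grid."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

lags, gamma = empirical_variogram(z, max_lag=30)
# gamma rises with lag and flattens near the data variance (the "sill")
# once the lag exceeds the correlation length.
print(gamma[0], gamma[-1], z.var())
```

Fitting a variogram model to such an empirical curve is the entry point to kriging and conditional simulations, covered in the lessons below.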

In this session, we present the fundamentals of building probability models and of operating stochastic signal processing, and we show how probability models are used in seismic processing:

- The theory of regionalized variables
- Probability models
- Variograms
- Kriging
- Simulations
- Spatial data quality assessment
- Spatial data conditioning
- Stochastic wave separation
- Stochastic stack
- Depth conversion

Special thanks to Luc Sandjivy and Arben Shtuka for their advice and invaluable help in creating these lessons and in providing the field data. The different data sets were processed with the UDOMORE software (http://www.seisquare.com/).