
Benefits and Limitations of Assisted History Matching: The optimization crux / Cancelliere, MICHEL ALEXANDER. - (2012).

Benefits and Limitations of Assisted History Matching: The optimization crux

CANCELLIERE, MICHEL ALEXANDER
2012

Abstract

Reservoir engineering is tasked with studying the behavior and characteristics of an oil or gas reservoir in order to define a future development and production strategy that maximizes profit. History matching is a fundamental step toward reservoir production forecasting and the quantification of the associated uncertainty, and it is arguably the most challenging phase of reservoir simulation. The team performing the calibration has to deal with non-uniqueness, as history matching is an ill-posed inverse problem due to insufficient constraints and data. Some years ago, the scientific community began to explore a new methodology called automatic history matching. The idea is to treat history matching as an optimization process, i.e. to define an objective function representing the discrepancy between measured (real) and simulated data, and then to minimize that objective function. In recent years, the scientific community has taken a great leap forward in automating the history matching process for calibrating dynamic reservoir models. Some of the methods have been inherited from other scientific disciplines; others have been constructed ad hoc for the history matching problem. At present, however, there is no clear winner. The workflow of reservoir studies has evolved in parallel, boosting integration among disciplines and somewhat reducing the time needed to calibrate a model. One would therefore expect assisted history matching to be the standard, but history matching remains a very knotty problem. Yet the ability to exploit assisted history matching procedures is a key step toward finding multiple calibrated reservoir models. Obtaining a number of solutions to the inverse problem is important because they can be used for field performance prediction, yielding a representative evaluation of the risks associated with any given reservoir development scenario.

The aim of this research was to discuss the benefits, limitations and drawbacks of assisted history matching using different techniques, such as evolutionary multi-objective optimization and data assimilation methodologies. Furthermore, the research set out to prove that a single-calibrated-model approach can lead to large errors in the production forecast, and that only a truly probabilistic approach, based on multiple calibrated reservoir models relying on different geological configurations, can assess the uncertainty affecting the reservoir productivity and the final hydrocarbon recovery. Three reservoir cases were subjected to assisted history matching. The case studies have increasing complexity and were selected to tackle progressively more challenging problems in terms of the historical data to be assimilated and the number of parameters to be calibrated.

An innovative framework for the calibration of reservoir numerical models was conceived and applied to the first case study. The study was triggered by the need for an algorithm capable of taking advantage of a network of computers to process the large number of simulations required by a multi-objective optimization process. Multi-objective optimization is the process of simultaneously optimizing two or more conflicting objectives subject to certain constraints. A multi-objective evolutionary algorithm (SPEA2) for assisted history matching was implemented; evolutionary algorithms are inspired by the theory of the evolution and the behavior of living species.
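To make the optimization formulation concrete, the following minimal Python sketch (all names and the data layout are illustrative assumptions, not the thesis code) defines one weighted least-squares mismatch per observed quantity; a multi-objective algorithm such as SPEA2 minimizes these objectives simultaneously rather than collapsing them into a single scalar:

```python
import numpy as np

# Illustrative only: function and key names are assumptions, not the thesis code.
def mismatch(simulated, observed, sigma):
    """Weighted least-squares discrepancy between simulated and measured
    values of one quantity (e.g. the oil rate of a single well)."""
    simulated, observed, sigma = map(np.asarray, (simulated, observed, sigma))
    return float(np.sum(((simulated - observed) / sigma) ** 2))

def objectives(sim_results, history):
    """One objective per observed quantity. A multi-objective evolutionary
    algorithm such as SPEA2 minimizes all of them at once, producing a
    Pareto front of trade-off models rather than a single 'best' one."""
    return [mismatch(sim_results[q], history[q]["data"], history[q]["sigma"])
            for q in history]
```

In this formulation no single best model exists: the optimizer returns a Pareto front of trade-off solutions, each matching some observed quantities better than others.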
User interaction with the optimization algorithm was addressed by introducing an innovative fitness function that includes a “social”, or user, contribution to facilitate exploration of the solution space in the neighborhood of user-selected candidate solutions. The developed algorithm was coupled to a collective computation network. To overcome the licensing restrictions of the available commercial reservoir simulators, a two-phase black-oil numerical simulator based on the finite volume method was also developed. The results showed that the SPEA2 algorithm was capable of finding a representative set of optimal solutions that matched the historical data. The implemented workflow also allowed competent users to steer the selection of the fittest individuals, or solutions, by marking preferred or “liked” solutions. Furthermore, the optimization method showed a good degree of scalability, demonstrating the importance of parallelization for speeding up the calibration process.

In the second case study the calibration was carried out using both the adaptive Gaussian mixture filter (AGM) and the ensemble Kalman filter (EnKF). The adaptive Gaussian mixture filter is an ensemble-based approach to sequential data assimilation in which a different weight is associated with each ensemble member, whereas the ensemble Kalman filter assigns uniform weights to all members. The methodology was applied to the PUNQ-S3 model, a medium-complexity synthetic reservoir model constructed from a real field study and made available to the scientific community by the oil industry. More than 5,000 parameters were calibrated to match the historical production data from 5 different wells. The tested methods showed a remarkably low computational cost compared with global optimization methods. Both the EnKF and the AGM provided a fairly good estimate of the cumulative oil production in the forecast phase and effectively reduced the spread of the distributions around the observed data from the initial to the final models. The AGM proved better suited than the EnKF to estimating the petrophysical distributions; however, the bandwidth parameter of the AGM must be tuned for the method to deliver its best results.
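For reference, here is a minimal sketch of one EnKF analysis step in the perturbed-observations form (a common formulation, assumed here; variable names and the forward-model interface are illustrative, and this is not the thesis implementation). The AGM filter differs mainly in carrying a non-uniform, adaptively updated weight per ensemble member:

```python
import numpy as np

def enkf_analysis(X, d_obs, sigma_d, h, rng=np.random.default_rng(0)):
    """One EnKF analysis step (perturbed-observations variant).
    X       : (n_params, n_ens) ensemble of parameter vectors
    d_obs   : (n_data,) observed production data
    sigma_d : (n_data,) observation-error standard deviations
    h       : forward model mapping one parameter vector to predicted data
    """
    n_ens = X.shape[1]
    D = np.column_stack([h(X[:, j]) for j in range(n_ens)])  # predicted data

    # Ensemble anomalies (deviations from the ensemble mean)
    A = X - X.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)

    # Kalman gain from ensemble covariances; C_dd includes observation noise
    C_md = A @ Y.T / (n_ens - 1)
    C_dd = Y @ Y.T / (n_ens - 1) + np.diag(sigma_d ** 2)
    K = C_md @ np.linalg.inv(C_dd)

    # Each member is nudged toward its own noisy copy of the data; the
    # EnKF keeps uniform member weights, whereas the AGM filter would also
    # update a weight per member at this point.
    D_obs = d_obs[:, None] + sigma_d[:, None] * rng.standard_normal(D.shape)
    return X + K @ (D_obs - D)
```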
In the third case study the analysis focused on assessing the uncertainty associated with the facies distribution in a synthetic hydrocarbon reservoir comprising sand channels and a clayey floodplain, as in a typical fluvial depositional environment. Three different conceptual models of the internal reservoir geometry were defined; here, a conceptual model is a given set of geometrical parameters used to stochastically generate different facies realizations by distributing the hydrocarbon-bearing facies within the model domain. A total of nearly 80,000 parameters were calibrated by assimilating the data from the producer and injector wells. The results proved the importance of obtaining multiple solutions in order to achieve a meaningful quantification of the uncertainty associated with the production forecast for subsequent technical and economic risk analysis. The proposed AHM workflow efficiently reduced the uncertainty associated with the initial set of generated models by conditioning them to the dynamic data, and it significantly shifted the ensemble median towards the true value. It was also demonstrated that a good fit of the production data does not necessarily imply a good estimation of the reservoir parameters. This result further confirmed that a deterministic history matching approach can be very misleading, because a wrong production forecast can be obtained from a seemingly well-calibrated model.

Although the application to the three case studies proved the efficiency of the conceived workflow, this seems to be only the end of the beginning rather than the beginning of the end. User interaction in applying the new tools is even more important, and more interdisciplinary, than in manual HM workflows, because the parameters of the assisted history matching algorithms need to be tuned for the methods to work efficiently. The evolution of the technology and of the “market” should guarantee a smooth transition from manual to assisted history matching in the coming years.

In this research, novel solutions were presented to address the main known issues limiting the application of AHM to real cases. With the proposed workflow, reservoir studies can be significantly enhanced by generating several solutions that truly take into account the uncertainty of the data used for the construction and calibration of the model and, at the same time, the time necessary for calibrating the models can be reduced. The application of the investigated techniques can therefore lead to more representative production and economic forecasts. All the professionals involved in a reservoir study (geologists, geophysicists, reservoir engineers, etc.) can benefit from the proposed workflow, because it feeds the information extracted from dynamic data back into the modeling process, closing the loop and leading to a fully integrated approach.

In the thesis, novel techniques and methods were developed, with innovations focused on three main areas: parallel computation, algorithm selection and uncertainty assessment. Grid computing is considered one of the most important technological developments of this decade; indeed, there is growing interest from big companies such as Amazon (Amazon EC2, Elastic Compute Cloud) and Google (Google Cloud) in providing huge numbers of CPU cores for computationally intensive tasks. In this thesis, parallel computing was used to accelerate the calibration of reservoir numerical models using heuristic algorithms guided by user interaction. This approach makes it possible to handle high-resolution reservoir models that can be calibrated in a fraction of the time required by the traditional approach (a sketch of this embarrassingly parallel evaluation pattern is given at the end of this abstract). Traditional gradient-based methods have the advantage of converging rapidly towards a minimum; however, they provide only one solution, which can be significantly far from the “real” solution. In this thesis, a new generation of sequential filters was tested, namely the ensemble Kalman filter and the adaptive Gaussian mixture filter. Both algorithms provide multiple calibrated models that can better assess the uncertainty of the input parameters and offer a solution to the non-uniqueness issue inherent in history matching. The novel workflow, including the implemented algorithms, was then tested by calibrating a set of fluvial-depositional reservoir models under multiple facies distributions.
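Finally, as referenced above, a minimal sketch of the embarrassingly parallel evaluation pattern that the parallel-computing discussion relies on. Python's standard library is used here as a stand-in for the collective computation network described in the thesis, and `run_simulation` is a hypothetical placeholder:

```python
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    # Hypothetical placeholder: in practice this would launch one reservoir
    # flow simulation for the candidate model described by `params` and
    # return its mismatch against the historical data.
    return sum(p * p for p in params)

def evaluate_population(population, max_workers=8):
    """Evaluate all candidate models in parallel. Each simulation is
    independent, so the calibration scales almost linearly with the
    number of available cores."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_simulation, population))

if __name__ == "__main__":
    scores = evaluate_population([(0.1, 0.2), (0.3, 0.4)])
```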

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2498975