fmu.tools.rms package
Submodules
fmu.tools.rms.create_rft_ertobs module
create_rft_ertobs creates .txt and .obs files and a well-date file for use together with GENDATA in ERT assisted-history-match runs.
This script should be run from within RMS to be able to interpolate along well trajectories:
from fmu.tools.rms import create_rft_ertobs
SETUP = {...yourconfiguration...} # A Python dictionary
create_rft_ertobs.main(SETUP)
Required keys in the SETUP dictionary:
input_file or input_dframe: Path to a CSV file (or a DataFrame) with well names, dates, well point coordinates and pressure values.
Optional keys:
rft_prefix: Eclipse well names will be prefixed with this string
gridname: Name of grid in RMS project (for verification)
zonename: Name of zone parameter in RMS grid (if zone names should be verified)
welldatefile: Name of welldatefile, written to exportdir. Defaults to “well_date_rft.txt”
Result:
.txt files written to ert/input/observations
.obs files for each well written to ert/input/observations
well_date_rft.txt written to ert/input/observations
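Putting the keys above together, a minimal SETUP dictionary could look like this sketch (only the key names are taken from the documentation above; the values and file paths are hypothetical examples):

```python
# Hypothetical SETUP dictionary for create_rft_ertobs.main(); key names
# follow the documentation above, values are examples only.
SETUP = {
    # Required: CSV with well names, dates, well point coordinates and pressures
    "input_file": "../input/rft_observations.csv",
    # Optional keys:
    "rft_prefix": "R_",                    # prefix for Eclipse well names
    "gridname": "Simgrid",                 # grid in the RMS project (for verification)
    "zonename": "Zone",                    # zone parameter in the RMS grid
    "welldatefile": "well_date_rft.txt",   # written to exportdir
}
```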
- fmu.tools.rms.create_rft_ertobs.check_and_parse_config(config)[source]
Checks config, and returns a validated and defaults-filled config dictionary
- Return type:
Dict[str, Any]
- fmu.tools.rms.create_rft_ertobs.get_well_coords(project, wellname, trajectory_name='Drilled trajectory')[source]
Extracts well coordinates as a (wellpoints, 3) numpy array from a ROXAPI RMS project reference.
- Parameters:
project – Roxapi project reference.
wellname (str) – Name of the well as it exists in the RMS project.
- fmu.tools.rms.create_rft_ertobs.strictly_downward(coords)[source]
Check if a well trajectory has absolutely no horizontal sections
- Parameters:
coords (ndarray) – n x 4 array of coordinates; only the z values in the 4th column are used.
- Return type:
bool
- Returns:
True if there are no horizontal sections.
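The check amounts to verifying that the z column is strictly increasing; a minimal numpy sketch of the same idea (not the library implementation):

```python
import numpy as np

def strictly_downward_sketch(coords: np.ndarray) -> bool:
    """Return True if the z values (4th column of an n x 4 md/x/y/z array)
    are strictly increasing, i.e. the well has no horizontal sections."""
    z = coords[:, 3]
    return bool(np.all(np.diff(z) > 0))

# A near-vertical well: z strictly increasing
down = np.array([[0, 0, 0, 0], [100, 1, 1, 95], [200, 2, 2, 190]], dtype=float)
# A well with a horizontal section: z repeats
flat = np.array([[0, 0, 0, 0], [100, 50, 0, 90], [200, 150, 0, 90]], dtype=float)
```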
- fmu.tools.rms.create_rft_ertobs.interp_from_md(md_value, coords, interpolation='cubic')[source]
Interpolate the East, North and TVD values of a well point from its measured depth (md_value) along a well trajectory.
The interpolation of the well trajectory is linear if interpolation="linear", cubic spline otherwise.
The coords input array should consist of rows:
md, x, y, z
- Return type:
Tuple[float, float, float]
- Returns:
x, y and z along the wellpath at the requested measured depth value.
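A linear-only numpy sketch of this interpolation, assuming coords rows of (md, x, y, z) sorted by md (the actual function defaults to cubic-spline interpolation):

```python
import numpy as np

def interp_from_md_linear(md_value: float, coords: np.ndarray):
    """Linear-only sketch: interpolate x, y, z at a given measured depth.
    coords rows are (md, x, y, z), sorted by md."""
    md = coords[:, 0]
    # Interpolate each of the x, y, z columns against md
    return tuple(float(np.interp(md_value, md, coords[:, i])) for i in (1, 2, 3))

coords = np.array([
    [0.0,    0.0, 0.0,   0.0],
    [100.0, 10.0, 0.0,  99.0],
    [200.0, 20.0, 0.0, 198.0],
])
```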
- fmu.tools.rms.create_rft_ertobs.interp_from_xyz(xyz, coords, interpolation='cubic')[source]
Interpolate the MD value of a well point from its East, North and TVD values (the xyz tuple).
The coords input array should consist of rows:
md, x, y, z
Interpolation along the TVD axis of the well trajectory is used (linear if interpolation="linear", cubic spline otherwise) when the well is strictly going downward (TVD series strictly increasing).
If the well is horizontal or in a hook shape, a projection of the point on the well trajectory will be used, using cubic spline interpolation of the well trajectory points.
- Parameters:
coords (ndarray) – Well trajectory coordinates, rows of md, x, y, z.
xyz (Tuple[float, float, float]) – x, y and z for the coordinate where MD is requested.
interpolation (str) – Use either “cubic” (default) or “linear”.
- Return type:
float
- Returns:
Measured depth value at (x,y,z)
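For the strictly-downward case, the inverse lookup can be sketched by linearly interpolating the md-versus-TVD relation; the horizontal/hook-shaped case, which projects the point onto a cubic-spline trajectory, is not covered by this sketch:

```python
import numpy as np

def interp_md_from_tvd(z_value: float, coords: np.ndarray) -> float:
    """Sketch for the strictly-downward case: invert the md -> TVD relation
    by linear interpolation. coords rows are (md, x, y, z) with z strictly
    increasing."""
    md, z = coords[:, 0], coords[:, 3]
    return float(np.interp(z_value, z, md))

coords = np.array([
    [0.0,    0.0, 0.0,   0.0],
    [100.0, 10.0, 0.0,  99.0],
    [200.0, 20.0, 0.0, 198.0],
])
```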
- fmu.tools.rms.create_rft_ertobs.ertobs_df_to_files(dframe, exportdir='.', welldatefile='well_date_rft.txt', filename='rft_ertobs.csv')[source]
Exports data from a dataframe into ERT observation files. The input is essentially the observation dataframe, but where all “holes” in MD or XYZ have been filled.
If ZONE is included, it will be added to the output.
Gendata will read well_date_rft.txt and then each of the obs and txt files.
The syntax of the output .txt files is determined by semeio: https://github.com/equinor/semeio/blob/master/semeio/jobs/rft/utility.py#L21
semeio.jobs.rft.trajectory.load_from_file() will parse the .txt files: https://github.com/equinor/semeio/blob/master/semeio/jobs/rft/trajectory.py#L273
This function produces the following files:
<WELL_NAME>.txt (for all wells)
<WELL_NAME>_<REPORT_STEP>.obs
well_date_rft.txt
<wellname>.txt is a hardcoded filename pattern in GENDATA_RFT (semeio); well_date_rft.txt is supplied as configuration to GENDATA_RFT (semeio).
The obs-files must match the GENERAL_OBSERVATION arguments in the ERT config.
In the welldatefile, this function chooses how to enumerate the REPORTSTEPS parameter, which is to be given to GENDATA in the ERT config. It is constructed from the enumeration of DATE for each given well, meaning the number will be 1 for the first date for each well, 2 for the second date per well, etc.
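The per-well DATE enumeration described above can be sketched with pandas (column names WELL_NAME and DATE as used elsewhere in this module; the computed column name REPORT_STEP is illustrative):

```python
import pandas as pd

# Sketch of the REPORT_STEP enumeration: 1 for the first DATE per well,
# 2 for the second DATE per well, etc.
dframe = pd.DataFrame({
    "WELL_NAME": ["A-1", "A-1", "A-2", "A-1", "A-2"],
    "DATE": ["2000-01-01", "2001-01-01", "2001-01-01", "2000-01-01", "2002-01-01"],
})
dframe["DATE"] = pd.to_datetime(dframe["DATE"])

# Dense rank of DATE within each well gives the enumeration
dframe["REPORT_STEP"] = (
    dframe.groupby("WELL_NAME")["DATE"].rank(method="dense").astype(int)
)
```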
- Parameters:
dframe (DataFrame) – Contains data for RFT observations, one observation per row.
exportdir (str) – Path to directory where export is to happen. Must exist.
welldatefile (str) – Filename to write the “index” of observations to.
filename (str) – Filename for raw CSV data, for future use in GENDATA_RFT.
- Return type:
None
- fmu.tools.rms.create_rft_ertobs.fill_missing_md_xyz(dframe, coords_pr_well, interpolation='cubic')[source]
Fill missing MD or XYZ values in incoming dataframe, interpolating in given well trajectories.
- Parameters:
dframe (DataFrame) – Must contain WELL_NAME, EAST, NORTH, MD, TVD.
coords_pr_well (Dict[str, ndarray]) – One key for each WELL_NAME, pointing to an n by 3 numpy matrix with well coordinates (as output by roxapi).
- Return type:
DataFrame
- fmu.tools.rms.create_rft_ertobs.store_rft_as_points_inside_project(dframe, project, clipboard_folder)[source]
Store RFT observations for ERT as points in RMS under Clipboard. The points will be stored per zone, well and year, and will include useful attributes such as pressure, well name, date etc.
fmu.tools.rms.generate_bw_per_facies module
- fmu.tools.rms.generate_bw_per_facies.create_bw_per_facies(project, grid_name, bw_name, original_petro_log_names, facies_log_name, facies_code_names, debug_print=False)[source]
Function to be imported and applied in an RMS Python job to create new petrophysical logs from original logs, but with one log per facies. All grid blocks for the blocked wells not belonging to the facies are set to undefined.
- Return type:
None
- Purpose:
Create the blocked well logs to be used to condition petrophysical realizations where all grid cells are assumed to belong to only one facies. This script will not modify any of the original logs, only create new logs where only the petrophysical log values for one facies are kept and all others are set to undefined.
- Input:
grid model name, blocked well set name, list of original log names to use, facies log name, facies code with facies name dictionary
- Output:
One new petro log per facies per petro variable in the input list of original log names. The output will be saved in the blocked well set specified in the input.
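The core masking idea (one new log per facies, all other cells set to undefined) can be sketched with numpy; the function and log names here are illustrative, not the roxapi API:

```python
import numpy as np

def split_log_per_facies(petro_log, facies_log, facies_code_names, log_name="PHIT"):
    """Sketch: for each facies, return a copy of the petro log where cells
    belonging to other facies are set to NaN ('undefined').
    Names and naming scheme are hypothetical."""
    result = {}
    for code, name in facies_code_names.items():
        # Keep the petro value only where the facies log matches this code
        masked = np.where(facies_log == code, petro_log, np.nan)
        result[f"{name}_{log_name}"] = masked
    return result

facies = np.array([0, 0, 1, 1, 0])
phit = np.array([0.25, 0.30, 0.10, 0.12, 0.28])
logs = split_log_per_facies(phit, facies, {0: "Channel", 1: "Crevasse"})
```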
fmu.tools.rms.generate_petro_jobs_for_field_update module
- Description:
Implementation of a function to create new petrosim jobs, one per facies given an existing petrosim job using facies realization as input.
Summary:
The current script is made to simplify the preparation step of the RMS project by creating the necessary petrosim jobs for a workflow supporting simultaneous update of both facies and petrophysical properties in ERT.
- fmu.tools.rms.generate_petro_jobs_for_field_update.main(config_file, debug=False, report_unused=False)[source]
Generate new RMS petrosim jobs from an original petrosim job.
- Return type:
None
Description
This function can be used to generate new petrosim jobs in RMS from an existing petrosim job. The input (original) petrosim job is assumed to be a job for either a single-zone or multi-zone grid where the petrophysical properties are conditioned to an existing facies realization. The newly generated petrosim jobs will not use the facies realization as input, but assume that all grid cells belong to the same facies. Hence, if the original petrosim job specifies model parameters for petrophysical properties conditioned to a facies realization with N different facies, this script can generate up to N new petrosim jobs, one per facies the user wants to use in field parameter updating in ERT.
Input
An existing RMS project with a petrophysical job conditioning petrophysical properties on an existing facies realization.
A YAML-format configuration file specifying the necessary input to the function. This includes the name of the original petrosim job, the grid model name, and, for each zone and each facies, which petrophysical variables to use in the new petrosim jobs. It is possible to not use all petrophysical variables from the original job, and this can vary from facies to facies and zone to zone.
Output
A set of new petrosim jobs, one per facies that is specified in at least one of the zones. For a multi-zone grid, each of the new petrosim jobs will contain a specification of the petrophysical properties that are to be modelled for each zone for the given facies. The new jobs will get names reflecting which facies they belong to.
How to use the new petrosim jobs in the RMS workflow
When initial ensemble is created (ERT iteration = 0)
The RMS workflow should:
- First run the original petrosim job to generate realizations of petrophysical properties as usual (needed when not all of the petrophysical properties are going to be used as field parameters). If all petrophysical properties are going to be updated as field parameters in ERT, this step is not necessary.
- Add all the new petrosim jobs (one per facies) into the workflow in RMS.
- Copy the petrophysical property realizations for each facies from the geomodel grid into the ERTBOX grid used for field parameter update in ERT.
- Export the petrophysical properties for each of the facies to ERT in ROFF format.
- Use a script that takes the facies realization and the per-facies petrophysical properties as input, and uses the facies realization as a filter to select which values to copy from the per-facies realizations into the petrophysical property parameters generated by the original job. This will overwrite the petrophysical properties in the realization from the original job for those parameters that are used as field parameters to be updated by ERT. The other petrophysical parameters generated by the original job, but not used as field parameters in ERT, will be untouched.
When updated ensemble is fetched from ERT (ERT iteration > 0):
- Run the original petrosim job to generate all petrophysical parameters in the geogrid.
- Import the updated field parameters for petrophysical properties per facies into the ERTBOX grid.
- Copy the petrophysical property parameters that were updated as field parameters by ERT from the ERTBOX grid into the geomodel grid. This operation is the same as the last step when the initial ensemble was created. It will overwrite the petrophysical property parameters already created by the original job with the new values updated by ERT. All petrophysical property parameters not updated as field parameters by ERT will be untouched and will therefore have the same values as they had after the original petrosim job was run.
Summary
The current function is made to simplify the preparation step of the RMS project by creating the necessary petrosim jobs for a workflow supporting simultaneous update of both facies and petrophysical properties in ERT.
Usage of this function
- Input
Name of config file with specification of which field parameters to simulate per zone per facies.
- Optional parameters
debug info (True/False), report field parameters not used (True/False)
- Output
A set of new petrosim jobs to appear in the RMS project.
- fmu.tools.rms.generate_petro_jobs_for_field_update.read_specification_file(config_file_name, check=True)[source]
- Return type:
Dict
- fmu.tools.rms.generate_petro_jobs_for_field_update.check_specification(spec_dict)[source]
- Return type:
bool
- fmu.tools.rms.generate_petro_jobs_for_field_update.define_new_variable_names_and_correlation_matrix(orig_var_names, facies_name, new_var_names, orig_corr_matrix)[source]
- Return type:
Tuple[List[str], List[List[float]]]
- fmu.tools.rms.generate_petro_jobs_for_field_update.sort_new_var_names(original_variable_names, facies_name, new_variable_names)[source]
- Return type:
List[str]
- fmu.tools.rms.generate_petro_jobs_for_field_update.get_original_job_settings(owner_string_list, job_type, job_name)[source]
- Return type:
dict
- fmu.tools.rms.generate_petro_jobs_for_field_update.create_copy_of_job(owner_string_list, job_type, original_job_arguments, new_job_name)[source]
- fmu.tools.rms.generate_petro_jobs_for_field_update.get_zone_names_per_facies(used_petro_per_zone_per_facies_dict)[source]
- Return type:
Dict
- fmu.tools.rms.generate_petro_jobs_for_field_update.get_used_petro_names(used_petro_per_zone_per_facies_dict)[source]
- Return type:
List[str]
- fmu.tools.rms.generate_petro_jobs_for_field_update.set_new_var_name(facies_name, petro_name)[source]
- fmu.tools.rms.generate_petro_jobs_for_field_update.report_unused_fields(owner_string_list, job_name, used_petro_per_zone_per_facies_dict)[source]
- fmu.tools.rms.generate_petro_jobs_for_field_update.check_consistency(owner_string_list, job_name, used_petro_per_zone_per_facies_dict, report_unused=True)[source]
- Return type:
None
fmu.tools.rms.import_localmodules module
Import a local module made inside RMS, and refresh the content.
- fmu.tools.rms.import_localmodules.import_localmodule(project, module_root_name, path=None)[source]
Import a library module in RMS which exists either inside or outside RMS.
Inside a RMS project it can be beneficial to have a module that serves as a library, not only a front end script. Several problems exist in current RMS:
RMS has no awareness of this ‘PYTHONPATH’, i.e. <project>/pythoncomp
RMS will, once loaded, not refresh any changes made in the module
Python requires extension .py, but RMS often adds .py_1 for technical reasons, which makes it impossible for the end-user to understand why it will not work, as the ‘instance-name’ (script name inside RMS) and the actual file name will differ.
This function solves all these issues, and makes it possible to import a RMS project library in a much easier way:
import fmu.tools as tools

# mylib.py is inside the RMS project
plib = tools.rms.import_localmodule(project, "mylib")
plib.somefunction(some_arg)

# exlib.py is outside the RMS project, e.g. at ../lib/exlib.py
elib = tools.rms.import_localmodule(project, "exlib", path="../lib")
elib.someotherfunction(some_other_arg)
- Parameters:
project – RMS ‘magic’ project variable
module_root_name – A string that is the root name of your module. E.g. if the module is named ‘blah.py’, then use ‘blah’.
path – If None then it will use a module seen from RMS, being technically stored in pythoncomp folder. If set, then it should be a string for the file path and load a module stored outside RMS.
fmu.tools.rms.qcreset module
- fmu.tools.rms.qcreset.set_data_constant(config)[source]
Set data from RMS constant.
This method is a utility to set surface and 3D grid property data to a given value. The value must be of the correct type (e.g. integer for a discrete 3D property). Its purpose is to make sure that these data are properly generated by the modelling workflow and not inherited from a previous run with the corresponding jobs deactivated. The data are set to a value triggering attention, and are not deleted, in order not to reset jobs in RMS.
The input of this method is a Python dictionary with defined keys. The keys “project” and “value” are required while “horizons”, “zones” and “grid_models” are optional (at least one of them should be provided for the method to have any effect).
- Parameters:
project – The roxar magic keyword project referring to the current RMS project.
value – The constant value to assign to the data. It could be 0 or -999 for example. If discrete properties from grid models are modified, the value should be applicable (integer).
horizons – A Python dictionary where each key corresponds to the name of the horizons category where horizon data need to be modified. The value associated to this key should be a list of horizon names to modify. If the string “all” is assigned instead of a list, all available horizon names for this category will be used. Alternatively, if a list of horizons categories is given instead of a dictionary, the method will apply to all horizons within these horizons categories.
zones – A Python dictionary where each key corresponds to the name of the zones category where zone data need to be modified. The value associated to this key should be a list of zone names to modify. If the string “all” is assigned instead of a list, all available zone names for this category will be used. Alternatively, if a list of zones categories is given instead of a dictionary, the method will apply to all zones within these zones categories.
grid_models – A Python dictionary where each key corresponds to the name of the grid models where properties need to be modified. The value associated to this key should be a list of property names to modify. If the string “all” is assigned instead of a list, all available properties for this grid model name will be used. Alternatively, if a list of grid model names is given instead of a dictionary, the method will apply to all properties within these grid models.
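Following the parameter descriptions above, a configuration sketch for set_data_constant (the horizons category, zones category and grid model names are hypothetical examples):

```python
# Hypothetical configuration for set_data_constant; key names follow the
# parameter list above, category/grid names are examples only.
CONFIG = {
    "project": None,  # in RMS, pass the roxar magic 'project' variable here
    "value": -999,
    "horizons": {"DS_extracted": ["TopReek", "MidReek"]},
    "zones": {"IS_isochore": "all"},            # "all" selects every zone name
    "grid_models": {"Geogrid": ["PHIT", "Facies"]},
}
```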
- fmu.tools.rms.qcreset.set_data_empty(config)[source]
Set data from RMS empty.
This method is a utility to set surface and 3D grid property data empty. The value must be of the correct type (if discrete 3D property for example). The purpose of it is to make sure that those data are properly generated by the modelling workflow and not inherited from a previous run with the corresponding jobs deactivated.
The input of this method is a Python dictionary with defined keys. The keys “project” and “value” are required while “horizons”, “zones” and “grid_models” are optional (at least one of them should be provided for the method to have any effect).
Input configuration:
- project: The roxar magic keyword project referring to the current RMS project.
- horizons: A Python dictionary where each key corresponds to the name of the horizons category where horizon data need to be made empty. The value associated to this key should be a list of horizon names to modify. If the string “all” is assigned instead of a list, all available horizon names for this category will be used. Alternatively, if a list of horizons categories is given instead of a dictionary, the method will apply to all horizons within these horizons categories.
- zones: A Python dictionary where each key corresponds to the name of the zones category where zone data need to be made empty. The value associated to this key should be a list of zone names to modify. If the string “all” is assigned instead of a list, all available zone names for this category will be used. Alternatively, if a list of zones categories is given instead of a dictionary, the method will apply to all zones within these zones categories.
- grid_models: A Python dictionary where each key corresponds to the name of the grid models where properties need to be made empty. The value associated to this key should be a list of property names to modify. If the string “all” is assigned instead of a list, all available properties for this grid model name will be used. Alternatively, if a list of grid model names is given instead of a dictionary, the method will apply to all properties within these grid models.
- Parameters:
config (Dict) – Configuration as a dictionary. See examples in the documentation.
fmu.tools.rms.rename_rms_scripts module
Fix RMS Python script file extensions and gather useful information
- class fmu.tools.rms.rename_rms_scripts.PythonCompMaster(path, write=True)[source]
Bases:
object
The PythonCompMaster class parses a .master file of the kind found in an RMS pythoncomp/ directory. These .master files are structured like so:
Begin GEOMATIC header
End GEOMATIC header
Begin ParentParams object
PSJParams object
PSJParams object
...
End ParentParams object
Each PSJParams object points to a Python script, and these objects are referenced in the root .master file if and when they are included in a workflow. PSJParams objects are stored like so:
Begin parameter
id = PSJParams
instance_name = script_name_in_rms.py
elapsedrealtime = 2.5200000405311584e-01
elapsedcputime = 0.0000000000000000e+00
tableoffset = 0
description.size = 0
opentime = 2022-12-20 07:18:38:766
identifier = 000000…fdbea80000022f
changeuser = msfe
changetime = 2022-12-20 07:19:32:244
standalonefilename = script_name_in_rms.py_1
End parameter
where
- instance_name is the filename displayed in RMS,
- standalonefilename is the filename as stored on disk,
- identifier is a 384-bit string that looks like a hash, but frequently increments by one bit sequentially.
The instance_name and standalonefilename can become out of sync, and the standalonefilename in particular can frequently be given a .py_1 extension rather than a .py extension.
This class offers methods to collect and correct these degenerate filenames.
- property parent: str
Path to the pythoncomp/ directory
- property path: str
Path to the pythoncomp/.master file
- property header: Dict[str, str]
The dict representing the GEOMATIC header of the .master file.
- property entries: Dict[str, Dict[str, str]]
The dict of Python file entries
- get_inconsistent_entries()[source]
Inspects all Python entries for Python scripts that have an instance_name that differs from its standalonefilename, i.e. the RMS name does not match the name of the file on disk.
- Return type:
List[str]
- get_invalid_extensions()[source]
Inspects all Python entries for Python scripts that have a non-standard file extension (not .py) on disk. Frequently this means they are .py_1 but other variations exist (or occasionally there is no file extension at all).
- Return type:
List[str]
- get_invalid_instance_names()[source]
Inspects all Python entries for Python scripts that have a non-standard file extension (not .py) in RMS.
- Return type:
List[str]
- get_pep8_noncompliant()[source]
Returns a list of instance names that are not PEP8 compliant.
- Return type:
List[str]
- get_nonexistent_standalonefilenames()[source]
Inspects all Python entries for Python scripts that have a non-existent file. Assumes the path is up-to-date and correct.
- Return type:
List[str]
- get_unused_scripts()[source]
Returns a list of Python scripts that aren’t used in any workflow.
- Return type:
List[str]
- fix_standalone_filenames()[source]
Attempts to fix the Python files on disk that are inconsistent with the files in RMS. This fix is rather simple and just copies the instance_name to be the standalonefilename under the presumption that RMS will have prevented someone from making duplicate instance names. This might be an unreasonable assumption given the necessity of this script in the first place.
If the names in RMS do not have a Python extension we skip them rather than try to figure it out.
- Return type:
List[str]
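The renaming rule described above (adopt the instance_name, skipping entries whose RMS name lacks a proper .py extension) can be sketched as a pure function; this is illustrative only, not the class's actual implementation:

```python
def fix_extension_sketch(standalonefilename: str, instance_name: str) -> str:
    """Sketch: if the on-disk name (e.g. 'script.py_1') differs from the RMS
    instance name (e.g. 'script.py'), adopt the instance name -- but only
    when the instance name has a proper .py extension."""
    if instance_name.endswith(".py") and standalonefilename != instance_name:
        return instance_name  # name the file should be renamed to on disk
    return standalonefilename  # nothing to fix, or extension unclear: skip
```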
fmu.tools.rms.volumetrics module
Module for handling volumetrics text files from RMS
- fmu.tools.rms.volumetrics.merge_rms_volumetrics(filebase, rmsrealsuffix='_1')[source]
Locate, parse and merge multiple volumetrics output files from RMS
Columns in parsed files will be renamed according to the hydrocarbon phase, which will be deduced from the filenames. Columns will be merged horizontally on common column names (typically Region, Zone, Facies, Licence boundary etc.)
- Parameters:
filebase (str) – Filename base, with absolute or relative path included; “<filebase>_oil_1.txt”, “<filebase>_gas_1.txt”, etc. will be looked for.
rmsrealsuffix (str) – String that will be used when searching for files. In a normal FMU context, this should always be kept as the default “_1”.
- Return type:
DataFrame
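The horizontal merge on common columns can be sketched with pandas on toy data (column names are illustrative; the real function deduces the phase suffix from the filenames):

```python
import pandas as pd

# Toy per-phase volumetrics tables sharing the Region and Zone columns
oil = pd.DataFrame({"Region": [1, 1], "Zone": ["Upper", "Lower"],
                    "STOIIP_OIL": [1.5, 2.0]})
gas = pd.DataFrame({"Region": [1, 1], "Zone": ["Upper", "Lower"],
                    "GIIP_GAS": [10.0, 12.0]})

# Merge horizontally on the common (non-volumetric) columns
merged = oil.merge(gas, on=["Region", "Zone"])
```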
- fmu.tools.rms.volumetrics.rmsvolumetrics_txt2df(txtfile, columnrenamer=None, phase=None, outfile=None, regionrenamer=None, zonerenamer=None)[source]
Parse the volumetrics txt file from RMS as Pandas dataframe
Columns will be renamed according to FMU standard, https://wiki.equinor.com/wiki/index.php/FMU_standards
- Parameters:
txtfile (Union[Path, str]) – Path to a file emitted by an RMS Volumetrics job. Can also be a Path object.
columnrenamer (Optional[Dict[str, str]]) – Dictionary for renaming columns. Will be merged with a default renaming dictionary (anything specified here overrides the defaults).
phase (Optional[str]) – Typically ‘GAS’, ‘OIL’ or ‘TOTAL’, signifying what kind of data is in the file. Will be appended to column names, and is guessed from the filename if not provided.
outfile (Optional[str]) – Filename to write CSV data to. If the directory does not exist, it will be made.
regionrenamer (Optional[Callable[[str], str]]) – A function that, when applied to a string, returns a new string. If used, it will be applied to every region value, using pandas.Series.apply().
zonerenamer (Optional[Callable[[str], str]]) – Ditto for the zone column.
- Return type:
DataFrame
The renamer functions could be defined like this:
def myregionrenamer(s):
    return s.replace('Equilibrium_region_', '')
or the same using a lambda expression.
- fmu.tools.rms.volumetrics.guess_phase(text)[source]
From a text-file, guess which phase the text file concerns, oil, gas or the “total” phase.
- Parameters:
text (str) – Multiline text to guess from.
- Returns:
“OIL”, “GAS” or “TOTAL”
- Return type:
str
- Raises:
ValueError – if guessing fails.
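A keyword-based sketch of the guessing logic (the keyword markers here are illustrative assumptions; the actual markers used by the library may differ):

```python
def guess_phase_sketch(text: str) -> str:
    """Guess 'OIL', 'GAS' or 'TOTAL' from volumetrics text, raising
    ValueError when no phase marker is found. Illustrative only."""
    upper = text.upper()
    if "TOTAL" in upper:
        return "TOTAL"
    if "STOIIP" in upper or "OIL" in upper:
        return "OIL"
    if "GIIP" in upper or "GAS" in upper:
        return "GAS"
    raise ValueError("Could not guess phase from text")
```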
Module contents
Initialize modules for use in RMS
- fmu.tools.rms.rmsvolumetrics_txt2df(txtfile, columnrenamer=None, phase=None, outfile=None, regionrenamer=None, zonerenamer=None)[source]
Parse the volumetrics txt file from RMS as Pandas dataframe
Columns will be renamed according to FMU standard, https://wiki.equinor.com/wiki/index.php/FMU_standards
- Parameters:
txtfile (Union[Path, str]) – Path to a file emitted by an RMS Volumetrics job. Can also be a Path object.
columnrenamer (Optional[Dict[str, str]]) – Dictionary for renaming columns. Will be merged with a default renaming dictionary (anything specified here overrides the defaults).
phase (Optional[str]) – Typically ‘GAS’, ‘OIL’ or ‘TOTAL’, signifying what kind of data is in the file. Will be appended to column names, and is guessed from the filename if not provided.
outfile (Optional[str]) – Filename to write CSV data to. If the directory does not exist, it will be made.
regionrenamer (Optional[Callable[[str], str]]) – A function that, when applied to a string, returns a new string. If used, it will be applied to every region value, using pandas.Series.apply().
zonerenamer (Optional[Callable[[str], str]]) – Ditto for the zone column.
- Return type:
DataFrame
The renamer functions could be defined like this:
def myregionrenamer(s):
    return s.replace('Equilibrium_region_', '')
or the same using a lambda expression.
- fmu.tools.rms.import_localmodule(project, module_root_name, path=None)[source]
Import a library module in RMS which exists either inside or outside RMS.
Inside a RMS project it can be beneficial to have a module that serves as a library, not only a front end script. Several problems exist in current RMS:
RMS has no awareness of this ‘PYTHONPATH’, i.e. <project>/pythoncomp
RMS will, once loaded, not refresh any changes made in the module
Python requires extension .py, but RMS often adds .py_1 for technical reasons, which makes it impossible for the end-user to understand why it will not work, as the ‘instance-name’ (script name inside RMS) and the actual file name will differ.
This function solves all these issues, and makes it possible to import a RMS project library in a much easier way:
import fmu.tools as tools

# mylib.py is inside the RMS project
plib = tools.rms.import_localmodule(project, "mylib")
plib.somefunction(some_arg)

# exlib.py is outside the RMS project, e.g. at ../lib/exlib.py
elib = tools.rms.import_localmodule(project, "exlib", path="../lib")
elib.someotherfunction(some_other_arg)
- Parameters:
project – RMS ‘magic’ project variable
module_root_name – A string that is the root name of your module. E.g. if the module is named ‘blah.py’, then use ‘blah’.
path – If None then it will use a module seen from RMS, being technically stored in pythoncomp folder. If set, then it should be a string for the file path and load a module stored outside RMS.
- fmu.tools.rms.generate_petro_jobs(config_file, debug=False, report_unused=False)
Generate new RMS petrosim jobs from an original petrosim job.
- Return type:
None
Description
This function can be used to generate new petrosim jobs in RMS from an existing petrosim job. The input (original) petrosim job is assumed to be a job for either a single-zone or multi-zone grid where the petrophysical properties are conditioned to an existing facies realization. The newly generated petrosim jobs will not use the facies realization as input, but assume that all grid cells belong to the same facies. Hence, if the original petrosim job specifies model parameters for petrophysical properties conditioned to a facies realization with N different facies, this script can generate up to N new petrosim jobs, one per facies the user wants to use in field parameter updating in ERT.
Input
An existing RMS project with a petrophysical job conditioning petrophysical properties on an existing facies realization.
A YAML-format configuration file specifying the necessary input to the function. This includes the name of the original petrosim job, the grid model name, and, for each zone and each facies, which petrophysical variables to use in the new petrosim jobs. It is possible to not use all petrophysical variables from the original job, and this can vary from facies to facies and zone to zone.
Output
A set of new petrosim jobs, one per facies that is specified in at least one of the zones. For a multi-zone grid, each of the new petrosim jobs will contain a specification of the petrophysical properties that are to be modelled for each zone for the given facies. The new jobs will get names reflecting which facies they belong to.
How to use the new petrosim jobs in the RMS workflow
When initial ensemble is created (ERT iteration = 0)
The RMS workflow should:
- First run the original petrosim job to generate realizations of petrophysical properties as usual (needed when not all of the petrophysical properties are going to be used as field parameters). If all petrophysical properties are going to be updated as field parameters in ERT, this step is not necessary.
- Add all the new petrosim jobs (one per facies) into the workflow in RMS.
- Copy the petrophysical property realizations for each facies from the geomodel grid into the ERTBOX grid used for field parameter update in ERT.
- Export the petrophysical properties for each of the facies to ERT in ROFF format.
- Use a script that takes the facies realization and the per-facies petrophysical properties as input, and uses the facies realization as a filter to select which values to copy from the per-facies realizations into the petrophysical property parameters generated by the original job. This will overwrite the petrophysical properties in the realization from the original job for those parameters that are used as field parameters to be updated by ERT. The other petrophysical parameters generated by the original job, but not used as field parameters in ERT, will be untouched.
When updated ensemble is fetched from ERT (ERT iteration > 0):
- Run the original petrosim job to generate all petrophysical parameters in the geogrid.
- Import the updated field parameters for petrophysical properties per facies into the ERTBOX grid.
- Copy the petrophysical property parameters that were updated as field parameters by ERT from the ERTBOX grid into the geomodel grid. This operation is the same as the last step when the initial ensemble was created. It will overwrite the petrophysical property parameters already created by the original job with the new values updated by ERT. All petrophysical property parameters not updated as field parameters by ERT will be untouched and will therefore have the same values as they had after the original petrosim job was run.
Summary
The current function is made to simplify the preparation step of the RMS project by creating the necessary petrosim jobs for a workflow supporting simultaneous update of both facies and petrophysical properties in ERT.
Usage of this function
- Input
Name of config file with specification of which field parameters to simulate per zone per facies.
- Optional parameters
debug info (True/False), report field parameters not used (True/False)
- Output
A set of new petrosim jobs to appear in the RMS project.
- fmu.tools.rms.create_bw_per_facies(project, grid_name, bw_name, original_petro_log_names, facies_log_name, facies_code_names, debug_print=False)[source]
Function to be imported and applied in an RMS Python job to create new petrophysical logs from original logs, but with one log per facies. All grid blocks for the blocked wells not belonging to the facies are set to undefined.
- Return type:
None
- Purpose:
Create the blocked well logs to be used to condition petrophysical realizations where all grid cells are assumed to belong to only one facies. This script will not modify any of the original logs, only create new logs where only the petrophysical log values for one facies are kept and all others are set to undefined.
- Input:
grid model name, blocked well set name, list of original log names to use, facies log name, facies code with facies name dictionary
- Output:
One new petro log per facies per petro variable in the input list of original log names. The output will be saved in the blocked well set specified in the input.