RDBMS Conversion
This section provides details for the general conversion of Temenos Transact Hash files to an RDBMS implementation. The procedure allows the conversion of both Temenos Transact dictionary and data files and the generation of VOC TABLE entries in place of stub files.
STUBFILES
Although STUBFILE files have been deployed successfully as the mechanism to access RDBMS database tables and other types of databases, they complicate the configuration, maintenance and deployment of Temenos Transact and the associated RDBMS database.
The STUBFILE files must reside on a physical file partition, which is usually shared across the Multiple Application Servers executing the Temenos Transact processes. The information contained within the STUBFILE files must be preserved and maintained over and above the actual data in the database. The STUBFILE files must always be aligned correctly with the database so that they correctly reflect the tables, and their types, that are created or deleted in the database.
It is possible to configure the VOC as an RDBMS table and hold the STUBFILE information directly in the VOC. With the VOC as an RDBMS table, the STUBFILE information is preserved directly in the associated RDBMS database. In addition, the application lock mechanisms are enhanced to be independent of the physical STUBFILE files.
With these modifications, the STUBFILE files are no longer necessary and can be removed. Similarly, the procedures required to keep the STUBFILE files correctly aligned and maintained with the RDBMS database are no longer necessary.
The format of the previous STUBFILE information as represented in a VOC TABLE entry is described in Appendix A.
After the VOC entries have been converted from STUBFILE file references to TABLE entries, any subsequent tables created using Temenos Transact (EBS.CREATE.FILE) will automatically generate a VOC TABLE entry rather than a STUBFILE F pointer reference.
Company Schemas
The upgraded versions of the RDBMS XML Database Drivers provide additional command line qualifiers to enable the table to be created using a separate database schema definition. In addition to the specification of an alternative database schema it is also possible to specify the data and index table spaces for the related database schema.
These database schema and table space qualifiers can now be used to separate and organise the RDBMS database on a Company name basis. For example, the FBNK.ACCOUNT table can be created in an FBNK database schema using the following create file command line.
CREATE.FILE DATA FBNK.ACCOUNT TYPE=XMLORACLE TABLE=ACCOUNT SCHEMA=FBNK
This command creates an ACCOUNT table under the FBNK database schema name in the default table space of T24DATA, with the primary key index in the T24INDEX table space.
The following additional qualifiers to the CREATE.FILE command are provided to override the default table space specification:
DATATABLESPACE=FBNKDATA INDEXTABLESPACE=FBNKINDEX
The above qualifiers direct the database to create the ACCOUNT table and index in the FBNKDATA and FBNKINDEX table spaces, respectively. The Company Schema conversion process is one of a suite of conversions provided by the RDBMS Conversion package, which must be installed and configured in the environment run directory.
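For example, a complete create file command combining the schema and table space qualifiers described above might look as follows; the table space names simply reuse the example values and would be replaced by site-specific names.

CREATE.FILE DATA FBNK.ACCOUNT TYPE=XMLORACLE TABLE=ACCOUNT SCHEMA=FBNK DATATABLESPACE=FBNKDATA INDEXTABLESPACE=FBNKINDEX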
Configuration for Conversion
The Temenos Transact RDBMS conversion package is supplied as a gzip tar archive. You need to restore the contents of the archive to the bnk.run directory of Temenos Transact, which creates an rdbmsPrograms subdirectory.
$ gunzip -c rdbmsPrograms_VERSION.PLATFORM.tar.gz | tar -xvf -
You need to export the following variables in the .profile in the Temenos Transact bnk.run directory, save the file, exit and re-login (a sketch of the resulting .profile lines follows the list).
- JBASE_I18N=1
- JBASE_CODEPAGE=utf8
- JBASE_LOCAL=en_US
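A minimal sketch of the corresponding .profile lines:

export JBASE_I18N=1
export JBASE_CODEPAGE=utf8
export JBASE_LOCAL=en_US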
You need to sign in to Temenos Transact and log out with BK to return to the jsh prompt, then execute CONVERT.SYSTEM.UTF8.JBASE from the jsh prompt and select Y when prompted for conversion. For example: jsh -> CONVERT.SYSTEM.UTF8.JBASE.
You need to log out and log in again to make sure you can still sign on to Temenos Transact.
To enable conversion, edit the .profile to include the following variable, exit and re-login.
export PATH=$PATH:$HOME/rdbmsPrograms
If the multi-company option is enabled, with or without separate company tablespaces, where each company has its own Oracle schema, the Oracle schemas and tablespaces for each company must be created in the Oracle Database prior to running the conversion program.
The RDBMS_COMPANY_SCHEMA=1 environment variable should be set in the Temenos Transact .profile to direct the RDBMS conversion to use the Company Name as the Database Schema name.
If separate tablespaces for each company’s data are chosen, the RDBMS_USE_COMPANY_TABLESPACE=1 environment variable should be set in the Temenos Transact .profile, to direct the RDBMS conversion to create the tables for each company in their respective tablespaces.
These options must be set prior to executing the conversion program.
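As an illustrative sketch, the corresponding .profile lines for a multi-company conversion with separate tablespaces would be:

export RDBMS_COMPANY_SCHEMA=1
export RDBMS_USE_COMPANY_TABLESPACE=1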
The conversion process should be run from a Temenos Transact Application Server only. You need to execute rdbmsConversion from the o/s prompt in the Temenos Transact bnk.run directory. The menu with options appears on the screen.
The number of agents required for the process can be between 2 and 8, and the database must be XMLORACLE. You must accept the default for converting the file data in bnk.data and set the other fields to Y.
You can also compare the new database with the original at a later date and choose not to convert the stub files.
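An indicative sketch of the prompts and responses follows; the prompt wording and the agent count are illustrative only, not a reproduction of the actual screen:

Number of agents (2-8)                             : 4
Database type                                      : XMLORACLE
Convert ‘Data’ Files (bnk.data) to RDBMS           : Y
Convert ‘Dict’ Files (bnk.dict) to RDBMS           : Y
Convert ‘Other’ Files (bnk.jnl/help/arc) to RDBMS  : Y
Convert Stub Files to VOC Table Entries            : Y
Compare new database with original                 : N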
The first phase of the conversion process will scan and analyse the Temenos Transact VOC file (Temenos Transact master file), creating a new copy of the VOC (./rdbmsVOC) in the run directory.
The conversion process will also create several work files in the ./rdbmsConversion conversion directory. These work files are used to do the following.
- Control the conversion process
- Report details related to the VOC analysis and any conversion problems
The RDBMS conversion process effectively duplicates the Temenos Transact directories, creating new directories and leaving the original files and directories intact. The new directories are named with an rdbms prefix, for example, bnk.rdbmsdata.
After all the required sections of the conversion are complete, the main conversion process will prompt the operator to continue if any errors have been detected; otherwise it continues to completion.
The final stage of the process is to run the rdbmsConversion/rdbmsMove.ksh script from the o/s prompt to reconfigure the environment after the conversion is complete; this preserves the original directories and puts the converted directories in their place. Before attempting to use the converted VOC, you need to exit the process and log in again.
After the conversion process has completed successfully and you have tested access to the Oracle database through Temenos Transact, you need to back up both the Temenos Transact environment and the Oracle database.
The rdbms conversion process can be interrupted at any time by using the Q key and restarted at a later time. On restart, you will be prompted to rescan the VOC. This option reprocesses the original VOC file; however, the ./rdbmsVOC file will not be recreated. If the option is not selected, the process continues with the current VOC information.
To revert to the original environment, that is, the state before the conversion process was started, run the rdbmsConversion/rdbmsRevert.ksh script from the o/s prompt.
Conversion Components
The RDBMS Conversion process is intended primarily to convert an existing Temenos Transact J4 hash file implementation into an RDBMS implementation. The RDBMS conversion suite of programs provides additional menu options for the conversion of dictionaries and for the movement and removal of stub files into VOC TABLE entries. An additional override also enables the implementation of Company Schemas during the conversion process.
The following menu options can be selected during RDBMS Conversion.
- Convert ‘Data’ Files (bnk.data) to RDBMS
- Convert ‘Dict’ Files (bnk.dict) to RDBMS (Optional)
- Convert ‘Other’ Files (bnk.jnl/help/arc) to RDBMS (Optional)
- Convert Stub Files to VOC Table Entries (Optional)
The RDBMS Conversion assumes the following Temenos Transact directory structure.
bnk/bnk.data - Contains Transact data files
bnk/bnk.dict - Contains Transact dict files
bnk/bnk.jnl - Contains Transact journal files
bnk/bnk.arc - Contains Transact archive files
Only files and/or tables in these directories are processed along with VOC. The VOC is used as the master for all files and/or tables. If a file and/or table exists without an associated VOC entry, it will not be processed.
The RDBMS conversion process effectively duplicates these directories, creating new stub files and directories while leaving the original files and/or tables and directories intact. The conversion creates a corresponding sub-directory for each directory converted, named with an rdbms prefix.
Example
bnk/bnk.data - bnk/bnk.rdbmsdata
bnk/bnk.dict - bnk/bnk.rdbmsdict
After conversion, the scripts are used to toggle and rename the directories so that the newly generated directories are enabled and the original directories are renamed with an eld prefix.
bnk/bnk.data - bnk/bnk.elddata
bnk/bnk.dict - bnk/bnk.elddict
Data Files are created as either NOXMLSCHEMA (BLOB) or XMLTYPE depending upon the file type specified in the Temenos Transact F.PGM.FILE entry.
Types ‘DHLTUW’ are converted to XMLTYPE
JOB.LIST files are created as NOXMLSCHEMA=WORK.
The drivers can use the WORK identifier to determine the appropriate RDBMS table type to use for work files. For example, JOB.LIST file entries can be limited to ensure they do not contain records over a specified size.
By default, the RDBMS conversion attempts to use the unique VOC name as the table name. If the table name exceeds 25 characters (Oracle), a unique table name is created with a unique sequence number.
For example, F.SC.REPORT.TYPE would have been created as SCF_SC_REPO000, where sc is the subdirectory and 000 is an incremented sequence number.
If this option is selected, the Dictionary files are also converted to RDBMS tables. The VOC name is used, prefixed with D_. For example, the Dictionary Table DICT F.ACCOUNT is created as table D_F_ACCOUNT. If stub files are still used, they are suffixed with ]D, for example, F.ACCOUNT]D. Dictionary tables are created as NOXMLSCHEMA (Binary) tables.
The Other files (bnk.jnl/help/arc) are also converted to RDBMS tables similar to the data file conversion. Any associated dictionary files will be generated in the bnk/bnk.rdbmsdict directory.
Files configured as Directories are not converted. However, the directories and contents will be duplicated to the new RDBMS directory structure.
Data verification is done by default in the conversion process. As records are written to the tables, they are read back and compared with the original. The errors (if any) are reported in rdbmsConversion/Errors.
Data comparison with the original files can also be selected as an option for the conversion process; in this case, the original data is processed as in the default verification prior to comparison. Records in rdbmsConversion/Errors containing invalid characters must be corrected manually and the conversion re-run.
The records are checked for data anomalies, such as unexpected characters (0x00-0x1f) or null key records. Null key records are skipped and records with unexpected characters (0x00-0x1f) have the characters stripped out prior to placement in the RDBMS tables. All anomalies are reported in the rdbmsConversion/DataErrors file.
Data Comparison
Data comparison with the original files can also be selected as an option for the conversion process. The original data will be processed as part of default verification prior to comparison.
If this override option is configured prior to the RDBMS conversion, the Company tables will be generated using the COMPANY name (for example, FBNK) as the database schema. The conversion process determines the files to be generated for the company schemas during the VOC scan process. The database schemas and associated table spaces must be configured for all companies prior to the conversion.
If a Company schema is configured, the company name is used for the database schema, allowing additional characters for the table name. For example, the VOC entry FBNK.ACCOUNT gets the table name ACCOUNT in the FBNK database schema.
Hence, FBNK.TELLER.DENOMINATION$HIS, instead of referencing table TTFBNK_TEL011, could be created as table TELLER_DENOMINATION#HIS in the FBNK database schema.
The STUBFILE files were used in Temenos Transact RDBMS implementations as a mechanism to redirect the file and/or table open request to the required database driver. The STUBFILE contains specific information relevant to both the Shared Object Library Loader and the JEDI Database Driver.
For example, the contents of a STUBFILE for ../bnk.data/ac/FBNK.ACCOUNT are as follows.
FBNK.ACCOUNT JBC__SOB XMLORACLEInit acFBNK_ACCOUNT
In the above example, the JBC__SOB and XMLORACLEInit strings indicate that the runtime needs to locate a Shared Object library containing the library initialisation function, XMLORACLEInit.
The runtime searches the file paths in the JBCOBJECTLIST and configured OS Library Path to locate the export file containing the required function. The library is then loaded and initialisation function is invoked. Subsequently, the driver open function is invoked with the rest of the arguments specified in the STUBFILE. The additional arguments in the STUBFILE can be specific to each database driver and are used to describe the name and type of the target file or RDBMS table.
In this example, the XMLORACLE database driver will try to access the RDBMS table name of acFBNK_ACCOUNT as type XML in the default database schema of the configured ORACLE database.
The STUBFILE can contain several additional arguments, which further define the table configuration to the database driver. The STUBFILE contents should never be modified manually. For the conversion process, the STUBFILE information is extracted and placed directly in VOC as VOC TABLE entries.
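As an illustration only, a VOC TABLE entry generated from the FBNK.ACCOUNT STUBFILE shown above might resemble the following; field 004 is taken from that STUBFILE, while the contents of field 003 (dictionary table arguments) are omitted here and will vary by installation.

001 TABLE - VOC TABLE Entry Identifier
002 XMLORACLEInit - Database Driver Initialisation Function
003
004 acFBNK_ACCOUNT - Data table arguments for database driver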
The STUBFILE conversion process is one of a suite of conversions provided by the RDBMS Conversion package, which must be installed and configured in the environment run directory.
Conversion Process
The aim of the full conversion process is to create a clean copy of the existing VOC as an RDBMS table in the database and to convert the existing VOC F pointers, usually used to refer to the existing hash files or STUBFILE files, to VOC TABLE entries, which contain the STUBFILE information directly.
The first phase of the conversion process will scan and analyse VOC, creating a new copy of the VOC (./rdbmsVOC) in the run directory. If the stub file conversion is selected, the rdbmsVOC will be created as a TABLE in the target RDBMS.
The conversion process will also create several work files in the ./rdbmsConversion conversion directory. These work files are used to control the conversion process and report details regarding the VOC analysis and conversion problems.
Each entry in VOC is processed as one of the following types.
A deprecated entry is a VOC entry type that is no longer required. These entries will not be copied to the new VOC.
An invalid file indicates that the files specified in the F pointer do not exist and cannot be referenced. These entries will be reported in the InvalidFiles work file but will not be copied to the new VOC. These entries need to be reviewed prior to the final conversion to determine if they are still required.
A duplicate file indicates that a reference to the file is found in one or more VOC entries. These entries will be reported in the DuplicateFiles work file and they will not be copied to the new VOC. These entries must be resolved prior to the conversion process.
The VOC vocabulary entries will be copied to the new VOC (./rdbmsVOC).
Each file pointer will be processed to determine the file paths of the file references. If the file path references the data or dict directory, a conversion entry will be generated in the respective DataFiles or DictFiles work file in the conversion directory (./rdbmsConversion). If the file pointer references a file in another user directory, the file pointer is simply copied to the new VOC (./rdbmsVOC) but is not considered for conversion.
All current VOC entries are also preserved in a work file (eldVOC) in the rdbmsConversion directory. After analysis of the VOC is complete, the operator will be prompted to continue with the conversion. The next phase of the conversion process will then process each of the entries generated in the DataFiles and DictFiles work files.
Background processes working on behalf of the main conversion process handle the work of the Company Schema conversion. These background processes are started for each phase of the conversion and are monitored, summarised and controlled by the main conversion process. The number of processes required for background processing can be specified before the conversion process begins.
The background process locks an entry in the work file (DataFiles, DictFiles or OtherFiles) depending upon the current conversion processing point. The background process then analyses the entry, creates the table in the normal way, retrieves the STUBFILE file information and updates the new VOC (./rdbmsVOC) accordingly.
All converted VOC entries will be recorded in the ConvertedVoc work file in the rdbmsConversion directory along with the associated STUBFILE file information in the StubFiles work file.
After all required sections of the conversion are completed, the main conversion process will either prompt the operator to continue in case of errors or continue the process and remove the original STUBFILE files from the original directory structure. VOC errors are reported in the VocErrors work file in the rdbmsConversion directory. If this step is not selected at the time of conversion, it can be performed later separately.
Finally, the new VOC (./rdbmsVOC) and directory structures (../bnk.rdbmsdata, ../bnk.rdbmsdict, and so on) can be enabled by the rdbmsConversion/rdbmsMove.ksh script. This script moves the original VOC to ./eldVOC and new VOC (./rdbmsVOC) to VOC, similar to the converted directory structures. The existing data directory (../bnk.data) is renamed first (../bnk.elddata) and new directory (../bnk.rdbmsdata) is renamed to replace the original (../bnk.data). Similarly, the existing dictionary directory (../bnk.dict) is renamed to (../bnk.elddict) and the new dictionary directory (../bnk.rdbmsdict) is renamed to replace the original dictionary directory (../bnk.dict).
Before attempting to use the converted VOC, exit the process and re-login.
To revert to the original VOC, you can use the rdbmsConversion/rdbmsRevert.ksh script, which does the following.
- Moves the new VOC back to ./rdbmsVOC
- Moves the ./eldVOC back to VOC
- Renames the ../bnk.elddata and ../bnk.elddict directories back to rdbmsdata and rdbmsdict
The rdbmsRestoreStubs program can be used to reconstruct the original STUBFILE files, if required.
The Conversion suite has been enhanced to include multithreaded conversion using sub agents to speed up the processing of large files. Previously, one agent handled one file, so huge files resulted in an extended time to complete the conversion. This enhancement splits a file into groups of record IDs, which can be allocated to multiple sub agents to convert in parallel. The number of sub agents, the file size at which they are used, and the batch size of records can all be configured with environment variables. After any environment changes have been made, rdbmsConversion is run in exactly the same way as before.
To start the process from scratch, you need to delete the rdbmsConversion directory first.
The following environment variable enables multithreading table conversion through sub agents.
RDBMS_USE_MULTITHREAD_AGENT=1 - Enables multithreading sub agents
The following are the default settings unless overridden by setting environment variables.
- RDBMS_MULTITHREAD_THRESHOLD = 100000
- RDBMS_MULTITHREAD_BATCHSIZE = 1000
- RDBMS_MULTITHREAD_MIN_SUBAGENTS = 2
- RDBMS_MULTITHREAD_MAX_SUBAGENTS = 10
The following table lists the environment variable overrides.
| Override | Description |
|---|---|
| RDBMS_MULTITHREAD_THRESHOLD=n | Overrides the file size, in records, at which sub agents are invoked |
| RDBMS_MULTITHREAD_MIN_SUBAGENTS=n | Overrides the minimum number of sub agents |
| RDBMS_MULTITHREAD_MAX_SUBAGENTS=n | Overrides the maximum number of sub agents |
| RDBMS_MULTITHREAD_BATCHSIZE=n | Overrides the number of keys per batch file |
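An illustrative .profile sketch that enables the sub agents and overrides some of the defaults (the numeric values are examples only, not recommendations) is:

export RDBMS_USE_MULTITHREAD_AGENT=1
export RDBMS_MULTITHREAD_THRESHOLD=200000
export RDBMS_MULTITHREAD_BATCHSIZE=2000
export RDBMS_MULTITHREAD_MAX_SUBAGENTS=8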
Support has also been added to specify exceptions for particular files. These exceptions are records added to the VOC file that list the names of the files concerned. The following table lists the VOC IDs involved.
| VOC ID | Description |
|---|---|
| CODEPAGE_EXEMPTFILES | Entry in VOC containing a list of files to skip any code page conversion (F.SPF.CHECK) |
| NOXMLFILES | Entry in VOC containing a list of files, normally of type BLOB, NOT to convert to XML. RDBMS_USE_NOXMLFILES=1 must be set to invoke this ID. |
| XMLFILES | Entry in VOC containing a list of files, normally of type BLOB, to create as XML. RDBMS_USE_XMLFILES=1 must be set to invoke this ID. |
| WORKFILES | Entry in VOC containing a list of files, normally of type BLOB, to create as WORK. RDBMS_USE_WORKFILES=1 must be set to invoke this ID. |
Execute Conversion
Execute rdbmsConversion (not in jsh)
The following are the available override options.
This environment variable can be set to suppress the automated removal of the original STUBFILE files. It must be set prior to the execution of the conversion program.
The RDBMS_COMPANY_SCHEMA=1 environment variable can be set to direct the RDBMS conversion to use the Company Name as the Database schema name. You need to create the database schemas in the database prior to conversion.
The RDBMS_USE_COMPANY_TABLESPACE=1 environment variable can be set to direct the creation of Company tables to use the table space qualifiers so that the new Company tables are created in their respective table spaces. This option must be set prior to executing the conversion, in the rdbmsCompanySchema script.
This environment variable overrides the attempt to use the VOC name for the table name and instead uses the previous 12-character file and/or table naming convention.
The RDBMS_USE_NOXMLFILES=1 environment variable controls the creation of NOXMLSCHEMA tables, whereby only tables listed in the NOXMLFILES VOC entry are created as BLOB type. Otherwise, all tables that would normally be created as BLOB type are created as XML type. This environment variable is used for XMLDB2 conversions to ensure that as many tables as possible are created as XML type rather than BLOB, as BLOB columns are not cached in DB2 and performance can be adversely affected when BLOB type is used for work files and so on.
The current list of files, which should be added to the NOXMLFILES entry in the VOC, are as follows.
- F.DE.PHANTOM
- F.DE.HANDOFF
- F.ENQUIRY.LEVEL
- F.OS.COPY
- F.OS.XML.CACHE
If a file cannot be converted due to illegal XML characters and its records cannot be cleaned up to avoid these characters, then the file should be added to the NOXMLFILES record.
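An indicative sketch of such a NOXMLFILES VOC entry, assuming one file name per field (the exact record layout may differ), is:

NOXMLFILES
001 F.DE.PHANTOM
002 F.DE.HANDOFF
003 F.ENQUIRY.LEVEL
004 F.OS.COPY
005 F.OS.XML.CACHE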
The RDBMS_FORCE_UTF8_CONVERSION environment variable forces a UTF-8 conversion of the data from the source codepage (the default source codepage is latin1) to UTF-8.
This should not be used in the normal course of events. Invalid characters should be identified and corrected or removed.
This environment variable is active only when RDBMS_FORCE_UTF8_CONVERSION is set. It allows you to specify the source codepage to use when converting data to UTF-8. This variable should be set when converting from codepages other than latin1 (or UTF-8) to ensure the data is encoded correctly.
This environment variable allows records containing tabs, carriage returns and line feeds to be copied to the RDBMS database without error. Normally these records are placed in the DataErrors or DictErrors file awaiting manual correction. There may be a requirement in some local installations to allow these characters through without error, for example, tab-delimited data.
Points to Remember
The RDBMS Conversion package logs output details for the main conversion process as well as the background processes in the &COMO& directory. If the RDBMS conversion procedure is re-executed, the process may prompt to rescan the VOC. If this option is chosen, it will reprocess the original VOC file but the ./rdbmsVOC file will not be recreated.
The initial creation of ./rdbmsVOC will be of the data type already configured for the F.PGM.TABLE table. The RDBMS conversion can be quit at any time by pressing the Q key. If the original VOC is rendered unusable, a copy of the original VOC is held in the rdbmsConversion directory as eldVOC.
The RDBMS conversion can be reset by executing the rdbmsReset.ksh or rdbmsReset.bat script. The reset reverts any VOC toggles made by the rdbmsMove.ksh script and removes all files and directories related to the RDBMS conversion. However, any STUBFILE files already removed will not be reinstated.
After the conversion is determined to be complete and new VOC tested, the RDBMS conversion work areas can be removed using rdbmsRemove.ksh or rdbmsRemove.bat.
Most problems are related to errors in the original VOC entries, and all InvalidFiles and DuplicateFiles should be resolved prior to the final conversion. You also need to ensure that the VOC entries are well formed and verify the distributed files (if any) to ensure that VOC Part File entries reference unique files, and so on.
EBS.CREATE.FILE
The Temenos Transact EBS.CREATE.FILE module is modified to automatically check F.SPF and detect if conversion has taken place.
VOC Name for table names - F_ACCOUNT
VOC Table Entries - TABLE^XMLORACLEInit^dict^data
Check for XSD Schemas - XSDSCHEMA=ACCOUNT
JOB.LIST files - NOXMLSCHEMA=WORK
In addition, company fields are added to determine whether to use the company name for the database schema on table creation.
Company Schema - YES
TABLE=ACCOUNT
SCHEMA=FBNK
Company Schema Table spaces - YES
DATATABLESPACE=FBNKDATA
INDEXTABLESPACE=FBNKINDEX
The EBS.CREATE.FILE module and associated CD must be applied post RDBMS conversion and prior to the use of Temenos Transact. In addition, a VOC TABLE entry should be configured to do the following:
- Provide select access to the STUBFILES table
- Enable EBS.CREATE.FILE to determine whether a table name is unique prior to creation
For example, the following configuration is given for STUBVIEW.
001 TABLE - VOC TABLE Entry Identifier
002 XMLORACLEInit - Database Driver Initialisation Function
003
004 V_STUBVIEW - Data table arguments for database driver
You also need to create a view (V_STUBVIEW) on the STUBFILES table in the RDBMS using the following command.
create view V_STUBVIEW (RECID) AS SELECT ID FROM STUBFILES
Lock Considerations
The physical STUBFILE files were also used to take and propagate application level record locks. The unique Inode and device numbers from the physical STUBFILE files were used as unique identification of the associated table.
The removal of the STUBFILE files means that the Inode and device number are no longer available to be used with the relevant lock mechanism. The VOC TABLE definitions are used in place of the STUBFILE files to create pseudo Inode and device numbers for use by the lock mechanisms.
The pseudo Inode is generated using the Hash value of the table name specified in the VOC TABLE definition. The device number is derived from either the Hash value of the Driver Initialisation Function (XMLORACLEInit) or string value of the JBASE_DATABASE environment variable, if configured.
In the case of a deployment using the OS lock mechanism, the OS locks are taken on abstracted files created specifically for record locks. The abstracted files are created by default in a /tmp/jbase directory, which can be overridden by setting JBASE_UNIX_LOCKDIR. The entries in the directory are in the jlock_IIIIIIII_DDDDDDDD format, in which IIIIIIII and DDDDDDDD are the hexadecimal representations of the Inode and device, respectively.
In case of a Multiple Application Server configuration using OS locks, the lock directory (/tmp/jbase) should be shared over a networked file system to propagate the OS locks correctly among servers.
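For example, in a Multiple Application Server deployment the lock directory could be redirected to a shared location by adding a line such as the following to the .profile (the path shown is illustrative only):

export JBASE_UNIX_LOCKDIR=/shared/jbase/locks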