This page explains the mechanism provided by Farrago for upgrading an existing catalog when the catalog repository metamodel changes in a new release. The mechanism assumes that only additive changes have been made to the metamodel; support for other kinds of changes would require a mechanism for running an XSLT transformation on the catalog XMI, which is not yet available. For information on which kinds of changes count as additive, please see CatalogUml.
Farrago uses a metamodel timestamp to detect incompatibilities between a stored catalog and the code-generated Java classes being used to access it. Incompatibility is detected on repository startup and causes any attempt to load the database to fail. This can happen when the migration procedure below is not carried out after catalog changes (e.g. in a developer build, when the developer skips ant createCatalog after making or syncing FEM changes). The timestamp itself is updated in FarragoExtMetamodel.uml by ArgoUML whenever that file is edited. During ant createCatalog, the Farrago build extracts this timestamp and writes it as a string literal into FarragoMetadataFactoryImpl.getCompiledModelTimestamp. The first time a new, unstamped catalog is loaded, Farrago saves this timestamp into the stored metamodel in the repository (as the annotation on FarragoPackage); subsequent load attempts compare the two timestamps (one from FarragoMetadataFactoryImpl, one from the stored metamodel).
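In pseudo-Java, the startup check behaves roughly as follows. This is a minimal sketch for illustration only; apart from the generated FarragoMetadataFactoryImpl.getCompiledModelTimestamp, the class and method names here are assumptions, not the actual Farrago source:

    // Hypothetical sketch of the startup timestamp check described above.
    // Only getCompiledModelTimestamp is a real generated method; the rest
    // is illustrative.
    public class TimestampCheckSketch {
        static void checkModelTimestamp(String compiled, String stored) {
            if (stored == null) {
                // First load of a new, unstamped catalog: Farrago stamps it
                // by writing the compiled timestamp as the annotation on
                // FarragoPackage, and the load proceeds.
                return;
            }
            if (!stored.equals(compiled)) {
                // The stored catalog was created against a different
                // metamodel; refuse to load until the upgrade procedure
                // below has been carried out.
                throw new IllegalStateException(
                    "stored metamodel timestamp " + stored
                    + " does not match compiled timestamp " + compiled);
            }
        }
    }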
(If you are integrating changes to FEM across Perforce branches, and you run into a conflict where FEM was changed on both branches, make up a new timestamp while editing as part of resolving the merge. Don't reuse either of the existing timestamps, since neither corresponds to the combined metamodel.)
The upgrade procedure is as follows (a scripted sketch of the key steps appears after the list):
- Export the complete catalog as an XMI file to a backup location. This is accomplished via the following SQL statement:
- CALL sys_boot.mgmt.export_catalog_xmi('/path/to/backup/FarragoCatalogDump.xmi');
- Shut down Farrago.
- Copy *.dat from the Farrago catalog directory to the backup location. This offline backup preserves physical data such as stored tables. (If Farrago is being used for a system without persistence, e.g. as a SQL/MED middleware server, this step may not be necessary.)
- Upgrade the software, temporarily reverting to a clean catalog repository and physical database.
- Start Farrago and issue the following SQL statement:
- ALTER SYSTEM REPLACE CATALOG;
- The statement above prepares the system to receive the old catalog copy on the next restart. It creates two files in the Farrago catalog directory: FarragoMetamodelDump.xmi and FarragoCatalogDump.xmi. As a side effect, it also wipes out the metamodel and catalog extent and shuts down the system, so no further SQL statements can be issued on the connection which executed the ALTER SYSTEM command.
- Copy *.dat from the backup location to the Farrago catalog directory, being careful not to overwrite the FarragoMetamodelDump.xmi file.
- Copy FarragoCatalogDump.xmi from the previously exported backup to the Farrago catalog directory; make sure that this does overwrite the new copy dumped by ALTER SYSTEM REPLACE CATALOG, otherwise the import will have no effect.
- Restart Farrago. The system should automatically import the combination of the new metamodel and the old catalog, upgrading all old objects as they are imported. After restart, the catalog should look exactly as it did before the upgrade when examined via SQL metadata queries.
- If the upgrade included changes to Farrago's initsql scripts, the effect of those changes is lost during catalog restoration. In this case, the scripts must be re-run (meaning they must be capable of dealing with existing system objects rather than failing; see FarragoCatalogChangeRules).
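The SQL portions of the procedure lend themselves to scripting. Below is a rough JDBC sketch of the export and replacement steps; the JDBC URL, directory paths, and error handling are assumptions for illustration, and the shutdown/restart and software-upgrade steps are installation-specific, so they appear only as comments:

    // Rough sketch of scripting the catalog upgrade; the paths and JDBC URL
    // are illustrative assumptions, not fixed by Farrago.
    import java.nio.file.*;
    import java.sql.*;

    public class CatalogUpgradeSketch {
        public static void main(String[] args) throws Exception {
            Path backupDir = Paths.get("/path/to/backup");    // assumption
            Path catalogDir = Paths.get("/path/to/catalog");  // assumption

            // Export the complete old catalog before upgrading.
            try (Connection conn = DriverManager.getConnection("jdbc:farrago:");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CALL sys_boot.mgmt.export_catalog_xmi('"
                    + backupDir.resolve("FarragoCatalogDump.xmi") + "')");
            }

            // ... shut down Farrago, back up *.dat, upgrade the software,
            // and restart with a clean catalog (installation-specific) ...

            // Prepare the upgraded system to receive the old catalog; this
            // also shuts the system down, killing the connection.
            try (Connection conn = DriverManager.getConnection("jdbc:farrago:");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("ALTER SYSTEM REPLACE CATALOG");
            }

            // Restore *.dat (not shown), then overwrite the freshly dumped
            // FarragoCatalogDump.xmi with the pre-upgrade export, leaving
            // the new FarragoMetamodelDump.xmi untouched.
            Files.copy(
                backupDir.resolve("FarragoCatalogDump.xmi"),
                catalogDir.resolve("FarragoCatalogDump.xmi"),
                StandardCopyOption.REPLACE_EXISTING);
        }
    }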
To test the procedure above, we piggyback onto the existing Farrago ant target testRngPlugin. The RNG plugin test "upgrades" the catalog by adding a new extension model, so we use that to simulate a normal additive upgrade. Before the plugin installation, the test creates some objects, exports the catalog, and backs up the physical database. After plugin installation and verification, the test issues ALTER SYSTEM REPLACE CATALOG, imports the old catalog, and then verifies that the old objects still exist and that the new plugin extension model is still working.
The unit test above verifies the functionality of ALTER SYSTEM REPLACE CATALOG, but it does not ensure that specific model changes haven't broken upgrade through the accidental introduction of non-additive changes. For that, we will need a test that does the following:
- As test data, check a pre-populated catalog XMI dump into source control. This should contain as many different object types as possible to get good coverage. The script which creates these objects should also be checked in.
- During tests, carry out the upgrade procedure, importing the checked-in catalog XMI dump.
- After upgrade, verify that the definitions for all of the pre-populated objects can be read via catalog views, and that they can be used in queries, dropped, etc. (a sketch of this verification step follows the list).
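Since this test does not exist yet, the following is only a sketch of what its verification step might look like, using standard JDBC metadata calls; the schema and table names are placeholders for whatever the population script would create:

    // Hypothetical verification step for the proposed regression test; the
    // object names are placeholders.
    import java.sql.*;

    public class UpgradeVerificationSketch {
        static void verify(Connection conn) throws SQLException {
            // The pre-populated table should be visible via catalog metadata...
            try (ResultSet rs = conn.getMetaData().getTables(
                     null, "UPGRADE_TEST_SCHEMA", "T1", null)) {
                if (!rs.next()) {
                    throw new AssertionError("T1 lost during catalog upgrade");
                }
            }
            // ...usable in queries...
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT COUNT(*) FROM upgrade_test_schema.t1")) {
                rs.next();
            }
            // ...and droppable.
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("DROP TABLE upgrade_test_schema.t1");
            }
        }
    }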
This regression test does not currently exist. Once it does, it will need to be maintained: whenever new features are added which require catalog changes, the population script should be updated to exercise them, the checked-in catalog XMI dump should be regenerated, and the verification test should be updated to access the new objects. Note: to avoid platform specifics, it would be a good idea to avoid checking a physical database into source control, which means the regression test may have to play some games to exercise pure Java metadata only. An alternative approach is probably cleaner: instead of checking in the old catalog dump, check the old metamodel into source control; make the test take Farrago "back in time" by initializing the catalog with the old metamodel; populate the objects and dump+backup; restore the latest metamodel; replace the catalog using the dump; and then verify as above.