Automating the tedious and error-prone DBA work involved in moving developers' changes to test and production. Change controls need to follow a controlled and accurate process, but they do not need to make the developers wait, or occupy the DBA's time with mundane tasks.
The apply_pending.sh script and its supporting scripts completely eliminate the DBA from the majority of this work while keeping you informed by email. If there is a problem you will know about it right away from the subject line of the email, and you can simply scroll down to see the details.
The normal corporate environment gives the developers full control to create, drop and update objects in the development database. They normally keep track of their changes and then collect these into one or more SQL scripts.
In some cases a quality control database exists, and so the developers will also have full control over the test database, where they can run their SQL scripts from development and merge code ready to move on to production with the code from other developers. Once they test their code in the cleaner database and work out any issues, they email their SQL scripts to the DBA.
The DBA applies the scripts to the quality database, or to the test database if a quality database is not part of the cycle. Only the DBA is allowed to run scripts in the final database prior to production, whether it be quality or test. Developers are restricted from connecting to this database with the schema owners or with any account that has the ability to change database objects. This ensures that no object change will be made to quality/test that is not done from a script that will later be applied to production.
The DBA is also responsible for applying the script to the production database and this task is often done after hours and may need to be coordinated with the activities of the business unit and application server administrators.
Automating the DBA's change control process relies greatly on uniform standards in database configuration, naming standards, backups, and clones. It will require work to adapt the scripts described here, but it is possible with average skill. These scripts are the result of years of organic growth. They continue to evolve to meet the requirements of new versions of Oracle as well as changes in corporate controls. What makes them complex is also what makes them so valuable, as they can be adapted as needed. You will not find yourself spending $40,000 on a software package and $30,000 on an IBM consultant to install it, only for it to slowly dissolve into disuse and be abandoned.
Download and unzip:
How It Works - The Basics
The scripts and databases all run under AIX Unix, one set of scripts per host. A single shared storage volume is NFS mounted on each host and also mapped to Windows via Samba. Scripts normally arrive from developers attached to an email addressed to the DBA. The DBA reviews the script, modifies it as needed, and saves it to a directory on the shared volume. Since the shared volume is mounted on every Oracle database host, the script is now available to the appropriate databases wherever they reside. There is no further need to copy the script.
The directory name is in the format of the host box where the database resides, the name of the database, and the name of the schema if the database hosts multiple schemas for distinct applications. For example the directory RSTEST3_TEST_CASH is the CASH schema in the TEST database on the rstest3 host. When the script is saved, a prefix is added which identifies the production database and the date that the script was received and applied. All spaces are also removed and the extension is changed to .sql if needed. For example an attached script named "update taxes.txt" would be renamed to "PROD_CASH_Feb12_update_taxes.sql" and saved to the RSTEST3_TEST_CASH directory.
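The renaming convention described above could be sketched in shell as follows. This is a minimal illustration, not the actual script; the function name and its argument layout are assumptions for the example.

```shell
#!/bin/sh
# Sketch of the rename convention: PROD-DB_SCHEMA_MonDD_original_name.sql
# with spaces replaced by underscores and the extension forced to .sql.
rename_script() {
  prod_db=$1; schema=$2; orig=$3
  stamp=$(date +%b%d)                  # e.g. Feb12
  base=$(echo "$orig" | tr ' ' '_')    # replace spaces with underscores
  base=${base%.*}.sql                  # force a .sql extension
  echo "${prod_db}_${schema}_${stamp}_${base}"
}

rename_script PROD CASH "update taxes.txt"   # e.g. PROD_CASH_Feb12_update_taxes.sql
```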
The script will never be copied from this directory. When it is eventually applied to PROD, it will be called from the RSTEST3_TEST_CASH directory. This ensures that the same script is applied to both TEST and PROD. Only the DBAs have access to the shared storage, so we also ensure that the script is not modified by the developers.
The shared storage directory also contains a file named PENDING.txt. This file is a list of the scripts that have been applied to the specific development/test/quality database but not yet applied to the production database. While saving the script, it is convenient to copy its name to the clipboard, then open the PENDING.txt file and add the script's name to the bottom of the list. Any script already in the list that has been applied to the current database will have a comment beside its name listing the last time it was applied and whether there were any ORA- errors in the log. For example the PENDING.txt list in the RSTEST3_TEST_CASH directory may look like this:
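(The original example was not included in the text; the listing below is an illustrative reconstruction based on the format described, with made-up script names.)

```
# PENDING.txt for RSTEST3_TEST_CASH
PROD_CASH_Jan05_create_tax_tables.sql  --Last Applied Jan05 14:02
PROD_CASH_Jan22_add_rate_index.sql     --Last Applied Jan22 09:45 with errors
PROD_CASH_Feb12_update_taxes.sql
```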
It is important not to remove the "#" from any comment line, or else the comment will be considered a script name. It is also important not to remove the "--Last Applied" comment, or else that SQL script will be considered "new" and applied again to the database. Applying all of the scripts in the PENDING.txt list is appropriate after clones, but is not normally done when one or more new scripts are being applied.
Scripts are applied in the order they are listed in the PENDING.txt file. You can change the order and then reapply all of the scripts if necessary. This would commonly be done in conjunction with a clone so the results are repeatable when applied to production.
With the PENDING.txt list updated, the next step is to run the apply_pending.sh script on the appropriate host. For example, to apply the new script only to the CASH schema of the TEST database on rstest3, we would login to rstest3 as the oracle user and run: apply_pending.sh TEST CASH -new
The "-new" parameter instructs apply_pending.sh to apply only the scripts in the Pending list that are not followed by a "--Last Applied" comment. If a backup is completely unnecessary we could run: apply_pending.sh TEST CASH -new -sb
Running apply_pending.sh On-Demand
Optionally we can avoid having to log in to the host box by utilizing the apply_pending_on_demand.sh script. This script runs from crontab every minute and simply checks for the existence of a flag file on the host. For example, the flag file might be named "apply_pending_TEST_CASH.flag". The flag file is created by a simple Windows VB script named "FlagApplyPending_TEST_CASH.vbs", which places the flag file on the host using ftp.
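The crontab check could look something like the sketch below. This is a simplified illustration of the idea, not the actual apply_pending_on_demand.sh; the flag directory and file naming follow the examples in the text, but the parsing details are assumptions.

```shell
#!/bin/sh
# Minimal sketch of the on-demand dispatcher, run from crontab every minute.
# Any flag file found is parsed into the database/schema arguments for
# apply_pending.sh, then deleted so it is only acted on once.
FLAG_DIR=${FLAG_DIR:-/u01/flag_ftp}

# apply_pending_TEST_CASH.flag -> "TEST CASH"
flag_to_args() {
  basename "$1" .flag | sed 's/^apply_pending_//; s/_/ /g'
}

for flag in "$FLAG_DIR"/apply_pending_*.flag; do
  [ -f "$flag" ] || continue      # glob matched nothing this minute
  args=$(flag_to_args "$flag")
  rm -f "$flag"                   # consume the flag before dispatching
  apply_pending.sh $args -new
done
```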
When FlagApplyPending_TEST_CASH.vbs is run, it parses its own name for the database name (TEST in this example) and the schema name (CASH) if a schema name is required. It then finds the host box for the database. Company standards name all production host boxes with a substring of "prod". The script uses this to determine if the target database is production. For a production database the script will simply populate a flag file with the OS username of the person that ran the script and ftp it to the appropriate host.
If the host is not production, then the script will prompt you with the question "Only apply scripts that have not yet been applied?" If you answer No, then all of the scripts in the Pending list will be applied. Most likely you want to answer Yes, which will apply only the new scripts, and a parameter is appended to the flag file that is sent to the host box via ftp. The information in the flag file is parsed by apply_pending_on_demand.sh and passed as arguments to apply_pending.sh. The apply_pending.sh script also reads the flag file to get the OS username of the person who ran the Windows VB script, so it can send that person an email report with the results.
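The host-side reading of the flag file might be sketched as below. The two-line file layout (username first, then an optional parameter such as -new) is an assumption for illustration; the real flag-file format may differ.

```shell
#!/bin/sh
# Sketch: pull the requesting OS username and any extra parameter out of
# the flag file so they can be passed along to apply_pending.sh.
read_flag() {
  REQUESTOR=$(sed -n '1p' "$1")    # OS user who ran the VB script
  EXTRA_ARG=$(sed -n '2p' "$1")    # optional parameter, e.g. -new
}
```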
It's convenient to place a copy of FlagApplyPending_TEST_CASH.vbs in each of the shared storage directories. To configure it you only need to change its name. For example, to flag the CASH schema on the TEST database the name of the VB script is FlagApplyPending_TEST_CASH.vbs, but to flag the TIPS database the name would be FlagApplyPending_TIPS.vbs. The VB script reads and parses its own name in order to derive its parameters. The Oracle client must be installed on the client from which you call the VB script, because it uses the tnsping utility to find the host where the database is running.

An account with ftp access is also required on each of the hosts, and this account and password are configured in the VB script. The account only needs access to create the flag file in a directory such as /u01/flag_ftp, where the oracle account which runs apply_pending_on_demand.sh has full rights. The oracle user on the host will take care of reading the flag file, deleting it, and then calling the apply_pending.sh script with the proper arguments. To avoid using ftp, an alternative approach would be to have the VB script write the flag file to the Samba-attached shared storage volume and alter the FLAG_DIR variable in apply_pending_on_demand.sh to use the appropriate Unix path.
Coordinating Change Control Activity with Windows Administrators and Developers
The Windows VB script and the apply_pending_on_demand.sh script can be used to coordinate production SQL script runs with Windows administrators and developers making application changes on production servers. Some change control tickets will specify that Pending SQL scripts should be run immediately before or after work done by the Windows administrators. This requires coordination between the DBA and a Windows administrator, who is often drawn from a pool, making it necessary for the DBA to wait until a specific administrator has been assigned. With the apply-on-demand process, the DBA can configure a Windows VB script in a directory accessible by the Windows administrator and then update the change control ticket with instructions for the Windows administrator to run the script at their convenience. When the Windows VB script is run, it reads the OS username, which is the Windows administrator's, and embeds that information in the flag file sent to the database host. The apply_pending.sh script will test for the existence of a flag file and, if one is found, will read the OS username from the flag file and include that user in the list of users who will receive an email report with the details of the database changes upon completion. In this way, the Windows administrator is notified when the database portion of the change is completed.
The same process can be used with developers who in some special cases control their own modifications to the application. Some applications can also modify database structures but still require additional changes made by SQL scripts. The DBA simply stages the appropriate scripts in the pending list and provides the developer with an appropriately named Windows VB script to trigger the execution on demand.
Allowing Developers to Submit SQL Scripts to Test/Quality
Some developers submit SQL scripts two or three times a day, frequently remove scripts from the pending list, and run clones. For those developers it may be best for them and for the DBA if the developers are able to submit and apply scripts on their own.
The process still prevents developers from directly modifying the database. It secures the submitted scripts to ensure they are not modified, and it tests the system to make sure there are no invalid objects, disabled constraints, or unusable indexes, that new objects are in their proper tablespaces, and so on. If an ORA- error occurs, it is reported to the developers and the DBAs for review.
Normally developers attach SQL scripts to an email and send it to the DBA; but to submit their own scripts the developer simply copies the SQL scripts into a designated directory for a specific database. For example, to apply a script to the quality TIPS database, or QTIPS, the developer copies the SQL script to a directory named QTIPS_script_drop where an Oracle domain user has full access.
From the developer's drop-box directory a scheduled batch file moves the scripts to a script_in_box directory just below the database directory on the shared storage for the specific database. For example, for the QTIPS database the directory would be RSTEST8_QTIPS/script_in_box. The batch file runs on a client under an account that has access to both the developers' directory and the shared storage directory via Samba. This isolates the developer from the shared storage directories.
On the database host, a crontab-scheduled script named apply_pending_check_in_box.sh moves the scripts from the script_in_box to the parent directory and renames them to include a prefix of the target production database and the date. The production target database is found by taking advantage of the company naming standard, which means that only the first letter needs to be removed. For example, the production database for QTIPS is TIPS. For different standards or special cases it would not be difficult to use a case statement in the script, or even access a database, in order to identify a production database and its supporting databases. For databases that support multiple schemas for multiple applications, a lookup list in the script is already used.
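The naming-standard derivation, with the case statement suggested for exceptions, could be sketched like this (a minimal illustration; the function name is an assumption):

```shell
#!/bin/sh
# Sketch: derive the production database from a supporting database name
# by stripping the first letter (QTIPS -> TIPS). The case statement is the
# hook for any databases that do not follow the naming standard.
prod_db_for() {
  case $1 in
    # SPECIALDB) echo OTHERPROD ;;    # exceptions to the standard go here
    *) echo "$1" | cut -c2- ;;
  esac
}

prod_db_for QTIPS    # TIPS
```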
Any spaces in the original SQL scripts' names are also replaced by underscores. Finally, the new SQL script names are appended to the PENDING.txt file and the apply_pending.sh script is called with the "-new" parameter so only new scripts are applied.
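The registration step, appending each renamed script to the database's PENDING.txt before handing off, might look like the sketch below. The function name and directory layout are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: append each renamed script to PENDING.txt in the database's
# shared storage directory, so apply_pending.sh will pick them up as new.
register_scripts() {
  dir=$1; shift
  for script in "$@"; do
    echo "$script" >> "$dir/PENDING.txt"
  done
}

# Example hand-off for the QTIPS database (paths assumed):
# register_scripts RSTEST8_QTIPS PROD_TIPS_Feb12_update_taxes.sql
# apply_pending.sh QTIPS -new
```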
Optionally the developers can modify the PENDING.txt file by simply dropping a replacement for it into the same directory where they would drop SQL scripts. Most of the time they would do this in order to remove one or more scripts from the PENDING.txt list. This does nothing to remove the scripts from the database, but that can be done by a database restore or a new clone from the production database. Both of these methods can be triggered by the developers, again making the process as responsive to the developers' needs as possible.
What apply_pending.sh Does
The apply_pending.sh script will first verify that it can find the scripts listed in the PENDING.txt list. These scripts do not have to be in the current database's shared directory, as all of the directories related to the production database will be searched. However, in most situations, and especially when the developer has submitted the scripts, they will be in the current database's directory, which is either the test or quality database.
Once the SQL scripts' existence is verified, a database backup is made. The apply_pending.sh script calls an external script named backup.sh, which is the company's standard backup script, but apply_pending.sh can be modified to use any backup process. For example, the default call is: "backup.sh hot QTIPS -snap=3 -v". This creates a hot NetApp snapshot backup of the database volume that will be retained for 3 days. The "-v" (verbose) parameter will generate an email to the DBAs and the primary contacts for the specific instance. The email recipients are configured in text files on the host which are specified in the configuration section of the backup.sh file.
Each new SQL script is applied in the order it is listed. A new script is identified as a script in the PENDING.txt file that does not have a "--Last Applied" comment. As each SQL script is run, a log file for it is created in the shared storage directory for the current database, even if the SQL script is being read from a different directory. Start and stop timestamps, as well as the log for each script, are concatenated into a single file that will be the body of an email report. When every new SQL script has been run, the PENDING.txt list is updated to include the "--Last Applied" comment beside each of the new scripts. This comment includes the timestamp when the script was applied, and "with errors" is added if the log file contains any ORA- error messages. A file named historical_pending.log is also appended with the updated PENDING.txt file in order to maintain a history of the activity.
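The "--Last Applied" bookkeeping could be sketched as follows. This is an illustration of the idea only; the comment format matches the text, but the function and its arguments are assumptions.

```shell
#!/bin/sh
# Sketch: after a script runs, tag its line in PENDING.txt with a
# "--Last Applied" timestamp, adding "with errors" if its log holds ORA-.
mark_applied() {
  pending=$1; script=$2; log=$3
  stamp=$(date '+%b%d %H:%M')
  suffix=""
  grep -q 'ORA-' "$log" && suffix=" with errors"
  # rewrite only the line that exactly matches the script name
  sed "s|^${script}\$|${script}  --Last Applied ${stamp}${suffix}|" \
      "$pending" > "$pending.tmp" && mv "$pending.tmp" "$pending"
}
```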
Next, the checkup.sh script is called, which runs the checkup.sql script against the database. The checkup script makes one pass to compile any invalid objects and reports any objects that remain invalid. It also reports phantom synonyms, objects in the wrong tablespace, disabled triggers, disabled constraints, schema owners with DBA access or the create database link privilege, tablespaces with low autoextend growth ability, excess undo and temp tablespace use, shared pool and buffer memory requirements, and many other parameters. Like the backup.sh script, the checkup.sh script is written specifically for the company's database standards, so it will not work properly on just any database. However, it is just a SQL script and can easily be adapted. If the checkup script finds any problems it, like the backup.sh script, will send an email using the list configured on the host box. The recipients include the DBA and most likely the primary contacts for the database.
When the SQL scripts are done, the complete report is emailed to the primary contacts for the affected database. It acquires the email addresses from the same files used by the backup.sh and checkup.sh scripts, as well as addresses acquired from a database of contacts, and finally from the user who may have triggered the run using the Windows VB script.
If the target database was a production database, then the apply_pending.sh script updates the PENDING.txt file by commenting out the applied scripts and adding a comment noting the time they were applied to production. For example:
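(The original example listing was omitted from the text; the fragment below is an illustrative reconstruction of what a commented-out production entry might look like.)

```
# PROD_CASH_Jan05_create_tax_tables.sql  --Applied to PROD Feb15 22:10
# PROD_CASH_Feb12_update_taxes.sql       --Applied to PROD Feb15 22:10
```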
The historical_pending.log is simply a chronological list of the Pending list after each run of apply_pending. It is the resource to use when you want to see when a script was first applied or when it was applied to production.