BY / JDA WMS Application Server — Backup Strategy
While good backup practices and strategies exist for database servers, whether Oracle or SQL Server, the same rigor is rarely applied to BY/JDA WMS application servers.
Most organizations have only VM-level backups, and those who can afford a DR site go that route. Yet even many of those who do have a DR site don’t test it regularly. It is equally important to have policies and strategies in place to back up:
- Configuration data for an Application Instance
- Transactional data
Since we’re assuming the database is being backed up regularly, transactional data (and archive logs for Oracle) is already covered. But the general practice is (or should be) that all configuration data, including database objects, comes in via rollouts; so if the Application Instance is missing those files, we have a problem. Over time, a single command can change many times to support different internal projects. So how are you tracking file version history for commands, triggers, CSVs, and other MOCA components?
A simple solution is to implement a version control system such as Subversion (SVN) or Git (hosted on GitHub, GitLab, or similar); this takes care of all configuration data for the Application Instance.
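As a minimal sketch of what this looks like with Git, the commands below put a custom MOCA command file under version control and then list its revision history. The folder layout and file name here are illustrative assumptions, not a BY/JDA standard; adjust them to your instance.

```shell
# Create a repository for instance configuration (paths are illustrative)
mkdir -p wms-config/cmdsrc && cd wms-config
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

# A hypothetical custom command file
echo "publish data where status = 'OK'" > cmdsrc/usr_check_status.mcmd

git add cmdsrc/usr_check_status.mcmd
git commit -qm "Add usr_check_status custom command"

# Full revision history is now available per file
git log --oneline -- cmdsrc/usr_check_status.mcmd
```

From here, every change to a command, trigger, or CSV becomes a tracked, reviewable commit rather than an anonymous overwrite on the server.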
Below is a simple example of listing the different file revisions of my SVN-to-Git conversion scripts.
Similarly, you can compare versions to see what changes were made. In the screenshot below, lines beginning with a minus (-) were removed and lines beginning with a plus (+) were added between revisions. So a before-and-after comparison between every revision of every file is available.
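To reproduce that kind of comparison yourself, the self-contained sketch below creates two revisions of a hypothetical MOCA command file and diffs them; the file name and command text are made up for illustration.

```shell
# Build a throwaway repository with two revisions of one file
mkdir -p diff-demo && cd diff-demo && git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

printf 'allocate inventory where wh_id = @wh_id\n' > list_pick_work.mcmd
git add . && git commit -qm "Revision 1"

printf 'allocate inventory where wh_id = @wh_id and prtnum = @prtnum\n' > list_pick_work.mcmd
git commit -qam "Revision 2: add prtnum filter"

# Lines prefixed with - were removed, + were added between the two revisions
git diff HEAD~1 HEAD -- list_pick_work.mcmd
```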
OK, so this takes care of configuration data. However, you cannot simply version control every folder of an instance, especially transactional data folders like LES\files and LES\log, or even instance-level folders like LES\data, because you may be using a single repository across multiple instances to keep all objects consistent between, say, Integrated Test, Unit Test, and Development environments. You do not want an accidental change reaching higher environments without approval.
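One practical way to enforce this in Git is a `.gitignore` that excludes the transactional and instance-level folders. The folder names below follow the LES layout mentioned above, but treat the exact paths as an assumption to adapt to your instance.

```shell
# Throwaway repository to demonstrate the ignore rules
mkdir -p ignore-demo && cd ignore-demo && git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

# Exclude transactional and instance-level folders from version control
cat > .gitignore <<'EOF'
les/files/
les/log/
les/data/
EOF

git add .gitignore
git commit -qm "Ignore transactional and instance-level folders"

# git check-ignore confirms a path would be excluded
git check-ignore les/log/system.log
```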
Moreover, changes in transactional data folders are so frequent and so unique to each instance that versioning them creates more headaches than it solves. A better, more approachable solution is time-controlled, folder-level backups of the Production instance. For example, every hour a backup job runs via a bash, batch, or PowerShell script that takes a backup and pushes it to secondary storage. That secondary storage could be on the same server, a file server, or even protected cloud storage.
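A bash version of such a job could look like the sketch below: archive one transactional folder with a timestamp, append a log entry, and prune old archives. All paths, the sample file, and the 7-day retention window are assumptions for illustration; in practice you would point `SOURCE_DIR` at something like LES\files and schedule the script hourly via cron or Task Scheduler.

```shell
# Hourly folder-level backup sketch (all paths and retention are assumptions)
SOURCE_DIR="les_files_demo"
BACKUP_ROOT="backups"

mkdir -p "$SOURCE_DIR" "$BACKUP_ROOT"
echo "sample transactional file" > "$SOURCE_DIR/order_12345.xml"   # stand-in data

# Timestamped compressed archive of the whole folder
STAMP=$(date +%Y%m%d_%H%M%S)
DEST="$BACKUP_ROOT/${SOURCE_DIR}_${STAMP}.tar.gz"
tar -czf "$DEST" "$SOURCE_DIR"

# Append an audit line to the backup log
echo "$(date) backed up $SOURCE_DIR -> $DEST" >> "$BACKUP_ROOT/backup.log"

# Simple retention: delete archives older than 7 days (an assumption)
find "$BACKUP_ROOT" -name '*.tar.gz' -mtime +7 -delete
```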
The screenshot below is from the log file of a backup script I wrote in batch to push backups to Azure Blob Storage.
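If you prefer bash over batch, the upload step can be sketched with Microsoft's AzCopy tool. The snippet below only generates a small wrapper script; the storage account, container name, and SAS token are placeholders you would replace with real values from your Azure subscription.

```shell
# Generate a hypothetical wrapper that uploads one archive with AzCopy
cat > push_to_blob.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
ARCHIVE="$1"
# Placeholder SAS URL -- generate a real one from the Azure portal or CLI
DEST_URL="https://<account>.blob.core.windows.net/wms-backups?<sas-token>"
azcopy copy "$ARCHIVE" "$DEST_URL"
EOF
chmod +x push_to_blob.sh
```

The backup job from the previous step would then call `./push_to_blob.sh "$DEST"` after creating each archive.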
How do you take Application Instance backups in your organization? Please do comment below. If you are not taking backups today, hopefully this persuades you to start. Hope this helps!