Imagine This…
You work at a global multinational corporation as a DevOps engineer. Your very large, enterprise-class Salesforce implementation is stable and your end-users are happy. Of course, you have minor fixes and extremely urgent feature requests – at least in the minds of your end-users. Requests like “this button (that’s always been there) has to be 5% larger, or users will never find it” go live every now and then. Sounds nice, doesn’t it? Now let’s make it even better by adding the fact that you have a working continuous integration pipeline pushing those features and fixes along in full automation, dev-test-prod in hours. It sounds almost too good to be true, and here’s why.
1. What’s the Problem?
In your idyllic role at this global corporation, you sometimes spend your 9-5 hours simply waiting for things to happen, and you feel that you have lost your edge. Somewhere along the way, everyone else on your team stopped paying attention to the Salesforce Release Notes. Totally understandable – every time a new release comes out, the release notes get longer and longer, and who has time to read 500+ pages?
Just as you are about to call it a week, one Friday afternoon, a frazzled developer comes to you with an urgent plea. “THE CI IS BROKEN”, they say. You know it cannot be true; you spent too much time setting it up – it must be user error!
But as it turns out, somewhere along the way, the unthinkable happened – the Platform evolved.
Your developer prepared the fix on time, committed the metadata to the repository, and waited…and waited, and then nothing. The expected changes never reach your environments, no testing happens, and all work grinds to a halt.
To enable the development team to push their latest fixes, you must allow the use of the latest APIs across your toolset. That’s an easy task to accomplish: you simply flip some switches and download some additional libraries if needed, right? Oh no, not this time. The demon of backward compatibility rears its ugly head.
2. The Solution (one version of it)
In a recent Salesforce release, API version 44.0, a.k.a. Winter ’19, a long-awaited change to Metadata API behavior went live, getting rid of the universally hated version numbers in retrieved Flow filenames.
This is a good and much-welcomed change; however, it requires some manual intervention. Fortunately, Salesforce has provided an easy-to-follow guide on how to prepare your repository for the new version.
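To picture what that change means on disk, a repository in the old format might look roughly like this (the flow name below is made up for illustration):

```
# Before (API 43.0 and earlier): one flowDefinition plus one file per version
src/flowDefinitions/Order_Discount.flowDefinition   <- holds <activeVersionNumber>
src/flows/Order_Discount-1.flow
src/flows/Order_Discount-2.flow
...
src/flows/Order_Discount-12.flow

# After migrating to API 44.0: a single versionless file per flow
src/flows/Order_Discount.flow
```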
But your environment has been alive and in constant motion for years: you have more than 30 Flows and even more Process Builders, some of them active and some long deprecated, and now you discover that nobody thought to do any clean-up in the repository. Ouch! Each Flow or Process Builder has gone through at least 10 versions over time. That equates to at least 60 files in the src/flowDefinitions directory and more than 600 files in src/flows. Wow!
As a self-respecting member of the IT community, you will not degrade yourself with such manual labor, so you decide to spend one of your slow afternoons the next week going through the details of all the changes and testing out the new API yourself. Then you spend another afternoon writing a simple Perl script to do the migration for you. The script looks at all flowDefinition files, builds a map of active versions and a list of no-version flows (those in Draft or Obsolete status), and then regenerates your files to match the new standard.
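A rough sketch of that logic might look like the following. This is only an illustration under the assumptions above (pre-44.0 layout with src/flowDefinitions and src/flows), not the author’s actual script:

```perl
#!/usr/bin/perl
# Sketch of the flow migration described above. Assumes the pre-44.0 layout:
# versioned files such as MyFlow-3.flow in src/flows, plus matching
# MyFlow.flowDefinition files in src/flowDefinitions.
use strict;
use warnings;

my $defs_dir  = 'src/flowDefinitions';
my $flows_dir = 'src/flows';

my %active;      # flow name => active version number
my @no_version;  # flows with no active version (Draft or Obsolete)

# Build a map of active versions from the flowDefinition files.
for my $def ( glob "$defs_dir/*.flowDefinition" ) {
    ( my $name = $def ) =~ s{^.*/}{};
    $name =~ s{\.flowDefinition$}{};
    open my $fh, '<', $def or die "Cannot read $def: $!";
    my $xml = do { local $/; <$fh> };
    close $fh;
    if ( $xml =~ m{<activeVersionNumber>(\d+)</activeVersionNumber>} ) {
        $active{$name} = $1;
    }
    else {
        push @no_version, $name;
    }
}

# Keep a single versionless .flow file per flow, as API 44.0 expects.
for my $name ( keys %active ) {
    rename "$flows_dir/$name-$active{$name}.flow", "$flows_dir/$name.flow"
        or warn "No file for active version of $name\n";
}
for my $name (@no_version) {
    # For Draft/Obsolete flows, keep the highest version found in the branch.
    my ($latest) = sort { $b <=> $a }
                   map  { /-(\d+)\.flow$/ ? $1 : () }
                   glob "$flows_dir/$name-*.flow";
    rename "$flows_dir/$name-$latest.flow", "$flows_dir/$name.flow"
        if defined $latest;
}

# The leftover versioned files and the flowDefinition files are no longer
# needed and can be removed from the branch.
unlink glob "$flows_dir/*-*.flow";
unlink glob "$defs_dir/*.flowDefinition";
```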
When you test the script, you realize there are some outlier cases you didn’t think about – old files in your repository that have always had mis-formatted XML nodes in them, from before you built your glorious CI pipeline. This includes flows that were deactivated in the environment but never committed to the repository, and flows with version 11 referenced in the flowDefinition when your branch only contains flow versions up to 10.
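One way to cover the missing-version case, extending the sketch above, might be a fallback like this (again an assumption, not the author’s exact fix):

```perl
# Possible fallback, run before the rename step in the sketch above:
# if the version named in a flowDefinition is missing from the branch,
# fall back to the highest version that is actually committed.
for my $name ( keys %active ) {
    next if -e "$flows_dir/$name-$active{$name}.flow";
    my ($latest) = sort { $b <=> $a }
                   map  { /-(\d+)\.flow$/ ? $1 : () }
                   glob "$flows_dir/$name-*.flow";
    if ( defined $latest ) {
        warn "$name: version $active{$name} not in branch, using $latest\n";
        $active{$name} = $latest;
    }
}
```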
You fix your script to cover these scenarios, run it, and now you’re happy. All is fine in the world again, and the pipeline can use the new version. You learn to live with people coming to you every now and then for a few weeks, saying some of the flows ended up in incorrect versions after the migration, and you address those issues as they arise.
3. The Easier Solution with AutoRABIT
Now let’s add one more element to our idyllic storyline: your CI tool is AutoRABIT.
“What does that change?” you ask. Let me explain.
Your pipeline feeds off a working repository, which is a single point of entry to all your environments for all metadata types – a.k.a. a “Single Source of Truth”. On the side, you have a secondary repository with metadata backups. That one holds everything and gets scheduled updates from your main environments daily, plus you have jobs set up to pull from any additional environments into dedicated branches whenever needed.
In this scenario, there is no need to familiarize yourself with the migration process. You simply switch to the latest API in AutoRABIT, run the branching baselines to feed your backup repository, and then use that data to replace the files in your main repository. THE END. All in, the job takes about half an hour – just select the proper environment to pull from and push the changes to the working repository.
Of course, for sanity’s sake, you will need to stop your pipeline in both cases, since the migration will contain a lot of deletions – you don’t want failing destructive changes stopping your builds – but this option requires significantly less effort and is much easier to manage.
Jakub Platek is an Enterprise Architect at AutoRABIT. You can connect with him on LinkedIn.