At the most basic level, the word “migration” is defined as the movement of something from one place to another. Simple, right?
But depending on your interests, this common word can bring several divergent connotations to mind. For those interested in biology and ecology, it is the seasonal relocation of birds and butterflies: animals flying thousands of miles south every fall to escape the onset of winter. For movie buffs, it is the beginning of Ice Age, where adorable prehistoric animals migrate to flee fast-moving glaciers. For anthropologists, it is the movement of human populations: wandering in search of new pastures, new land, and new economic opportunities. To those in the information technology field, however, it takes on a whole new meaning: the mass transfer of data from legacy systems to new (and hopefully better) systems.
Each type of migration, however, is susceptible to its own problems and delays: natural disasters, predators, unexpected weather patterns, wars, and intergroup conflict. Data migration is no exception, with 38% of projects failing outright and nearly 80% failing to meet their stated objectives. What causes these roadblocks and missed objectives? While there are innumerable hurdles to a successful large-scale enterprise data migration, we will take a look at a few of the biggest.
Hurdle #1: Time
Time is the biggest hurdle, often becoming the indirect (or direct) cause of many of the others. Part of the problem is a systematic bias in human judgment: the planning fallacy is common even in high-performing firms, because people have an inherent tendency to overestimate the likelihood of best-case scenarios. Small problems and delays are notoriously difficult to forecast, yet they can compound and snowball over the long course of a project. Thus, time adds up on several fronts, often stretching a migration over unforeseen months or even years. Many of the initial delays come from the sheer number of steps that must be completed before the migration can even begin.
First, all the business and technical requirements for the project must be gathered. Data on the old system is then mapped to the new system, producing a design for data extraction and data loading. This includes analyzing the legacy system's database and storage to identify every format and file type that will be moved. Problems with the existing data must also be identified: corruption within individual records can cause errors and delays during the migration itself. If it is determined that additional development is needed to read and transfer specific formats, handle corrupted data, or support unfamiliar devices or tools, that development time must be budgeted as well.
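The format inventory and corruption check described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual tooling: the record fields (`format`, `payload`, `checksum`) are assumptions, and a stored-checksum mismatch stands in for the many corruption signals a real migration would look for.

```python
import hashlib

def inventory(records):
    """Group legacy records by format and flag ones failing a checksum check.

    `records` is a list of dicts with hypothetical fields:
    id, format, payload (bytes), and a previously stored checksum.
    """
    formats = {}   # format name -> count of records to migrate
    corrupt = []   # ids whose payload no longer matches its checksum
    for rec in records:
        formats[rec["format"]] = formats.get(rec["format"], 0) + 1
        # Recompute the digest; a mismatch is one simple corruption signal.
        actual = hashlib.md5(rec["payload"]).hexdigest()
        if actual != rec["checksum"]:
            corrupt.append(rec["id"])
    return formats, corrupt
```

A report like this gives the team an early estimate of how many formats need conversion work and how much remediation corrupted data will require.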
Unnecessary complexity is the enemy of an efficient migration. The more legacy tools and data repositories an enterprise has, the more time-consuming a migration will be, and a menagerie of APIs can add layers of bottlenecks, lag, and potential points of failure to a struggling migration. This can be avoided if all legacy data from proprietary repositories is converted into a standard format before being migrated into the new archive, as is the case with the ZL Unified Archive. Standard-format conversion ensures all legacy data can be consolidated into a single repository.

Following these initial steps, a test migration must be performed in a separate test environment. Metrics gathered from this trial run are reviewed by the data owner to verify the quality and comprehensiveness of the migration process as designed. After a successful test migration, pre-migration commences, in which the production environment is set up and configured. Next comes the actual migration, which must be actively monitored, since frequent errors can derail the entire process and force migration plans to be altered. The duration of the actual migration depends heavily on the amount of data in the legacy system and on how well the process was set up. After all this time and all these steps it would seem the migration is complete, but summary reports and ingestion errors must still be addressed.
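The test-migration step above amounts to a dry run that records metrics for the data owner to review. The sketch below is illustrative only: `migrate_fn` and the record shape are assumptions, standing in for whatever extraction-and-load routine a real project would exercise.

```python
def trial_run(records, migrate_fn):
    """Attempt to migrate each record, collecting metrics instead of halting.

    Returns a summary the data owner can review before the real migration:
    counts of attempts, successes, failures, and the errors themselves.
    """
    metrics = {"attempted": 0, "succeeded": 0, "failed": 0, "errors": []}
    for rec in records:
        metrics["attempted"] += 1
        try:
            migrate_fn(rec)           # hypothetical extract-and-load routine
            metrics["succeeded"] += 1
        except Exception as exc:
            metrics["failed"] += 1
            metrics["errors"].append((rec["id"], str(exc)))
    return metrics
```

Capturing every failure (rather than stopping at the first) is what makes the trial useful: the error list shows which formats or records need attention before production cutover.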
Hurdle #2: Disruption and Adoption
Beyond the lengthiness of the process itself, a migration also presents a significant disruption to the everyday workflow within a company.
Data scheduled to be migrated (or data currently under migration) often cannot be edited or manipulated, resulting in workflow and business process delays. Ideally, there would be no need to interact with the legacy server and no need for APIs to extract data at a snail's pace. But companies often implement these "trickle" approaches so that workflows are not catastrophically uprooted. Disruption can also be minimized by reading directly from the database and vault, as is done in a migration to ZL's environment, which drastically cuts down on both migration time and workflow disruption.
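The "trickle" idea can be sketched as a batched loop: only the slice of data currently in flight is unavailable to users, while everything else stays editable. This is a minimal sketch under assumed names; real trickle tools add scheduling, retries, and delta syncs on top of this.

```python
def trickle_migrate(queue, migrate_batch, batch_size=100):
    """Migrate records in small batches so only one batch is locked at a time.

    `queue` is the list of records awaiting migration; `migrate_batch` is a
    hypothetical routine that moves one batch to the new system.
    """
    moved = 0
    while queue:
        batch, queue = queue[:batch_size], queue[batch_size:]
        migrate_batch(batch)   # only this slice is unavailable to end users
        moved += len(batch)
    return moved
```

The trade-off is exactly the one the article describes: smaller batches mean less disruption at any moment, but a longer total migration window.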
Yet even when the data migration itself goes as efficiently as possible, retraining and adoption of new systems may present months of continued challenges. To the average end user, data can seem to transfer magically in the night: an ephemeral journey through the ether. Human skills and habits, however, are not so quick to change. Retraining and adoption, along with letting go of the mentalities that accompanied the old systems, can add unforeseen time to migrations that went extremely well from a purely technical perspective. A new system that can "wean" end users gradually off the legacy environment, while still providing a secure environment for data, can take the edge off the process.
Hurdle #3: Money
Who would have guessed, right? The migration process tends to be expensive. Most projects have a limited budget, and due to unforeseen circumstances and roadblocks, the cost of a migration adds up quickly. When the risks of additional time come to fruition, billable engineering and IT-consultant hours explode, eating up project budgets without any tangible progress. And beyond the migration itself, a poorly executed migration that ends with lost or corrupted data can lead to far more expensive fines and litigation down the road.
Simply stated, the messier the legacy system, the easier it will be to vastly underestimate the final cost of a migration. Migrating multiple legacy repositories to multiple new ones is often a temporary fix at best, and a recipe for an over-budget project of ballooning time and complexity. So unless your business happens to be the US Mint, you're best off using the early-stage planning process to look for every possible way to consolidate systems and minimize potential (money-wasting) points of failure.
Enterprise Migration: Fly or Die
So with all these potential hurdles during the migration process, one might ask whether it is worth it to migrate enterprise data at all. Why not sit tight with your legacy system, patching and duct-taping as needed to keep it running, in order to avoid the risk of losing money? This is akin to asking why a sandpiper does not stay in Alaska during the winter instead of journeying to South America with its flock. It must migrate, or it will perish.
As data volumes grow, it will only become more difficult and costly to complete a migration project. Now is the time. But unlike the sandpiper's trip, there actually is a way to ensure the enterprise data's migration goes smoothly. A migration method that has minimal impact on the legacy system, eliminates isolated and duplicated silos of data, captures relevant envelope and supplementary metadata from the legacy tools, and maintains chain of custody between old and new systems is not only ideal but necessary for a seamless migration. With the right methods and routes, data, birds, butterflies, and humans will all migrate safely to their new homes and intended destinations.