Remember the days when applications were siloed and deployed across heterogeneous environments? The only way to synchronize application data was to move it through files across different environments, operating systems and protocols. It was painful and error-prone. To address the demand for interoperability and business continuity, a new kind of software appeared: Managed File Transfer (MFT) solutions. At that stage, their value proposition was about:
- Ability to connect and run on heterogeneous systems spread across multiple environments
- Reliability of file transfer across distributed and heterogeneous networks
Additional capabilities were required to make that value proposition complete:
- End-to-end visibility to demonstrate that reliability
- Loose coupling between environments to reduce the effort needed to connect them
- Data integrity to ensure the expected file reaches its destination and that nothing has altered it along the way
These requirements led to the design of a component in charge of properly executing file-based data synchronization.
Today, enterprises looking to improve operational efficiency and reduce costs are consolidating and rationalizing their infrastructures. IT teams standardize components and practices, and consolidate data storage through SAN, NAS and NFS across data centers, whether they are cloud, hybrid or on-premises.
As a result, it is now a common pattern to share storage between several business applications serving the same business domain. In this context, synchronizing data between two applications sharing the same disk no longer requires moving a file.
Nevertheless, application owners are facing challenges; they don’t know:
- when their application can read the shared file
- when the file has been updated and whether it is ready to be consumed (see the sketch after this list)
- if the file properties (metadata) have been changed
- when their application can overwrite the file
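In the absence of a managed solution, these questions are typically answered with a hand-rolled convention: the producer writes to a temporary name, renames it atomically, then drops a “ready” marker that the consumer polls for. The sketch below, in Python, is only an illustration of such a convention; the shared path and file names are hypothetical.

```python
import os
import time
from pathlib import Path

# Hypothetical shared storage mount and naming convention.
SHARED_DIR = Path("/mnt/shared/orders")
DATA_FILE = SHARED_DIR / "orders.csv"
READY_MARKER = SHARED_DIR / "orders.csv.ready"

def publish(content: bytes) -> None:
    """Producer side: write to a temporary name, rename atomically,
    then drop a marker so consumers know the file is complete."""
    tmp = SHARED_DIR / (DATA_FILE.name + ".tmp")
    tmp.write_bytes(content)
    os.replace(tmp, DATA_FILE)  # atomic rename on the same filesystem
    READY_MARKER.touch()

def wait_until_ready(timeout: float = 300.0, poll: float = 2.0) -> bool:
    """Consumer side: poll for the marker instead of guessing whether
    the producer has finished writing."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if READY_MARKER.exists():
            return True
        time.sleep(poll)
    return False
```

Every pair of applications sharing a disk ends up re-implementing, and slowly diverging on, this kind of protocol.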
Finally, they are both lacking:
- end-to-end visibility (they are no longer asking “where is my file?” but rather “what is the status of my file?”)
- end-to-end control over data security and integrity (a minimal check is sketched after this list)
- management of sequences and serialization
- application integration features with different levels of acknowledgment and non-acknowledgment
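Integrity is a good example of the logic that gets duplicated on both sides. Here is a minimal sketch, assuming the producer publishes a SHA-256 digest next to the data file; the helper names are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large payloads never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(data_file: Path, checksum_file: Path) -> bool:
    """Consumer side: recompute the digest and compare it with the one
    the producer published alongside the data file."""
    expected = checksum_file.read_text().strip()
    return sha256_of(data_file) == expected
```

Such a check confirms the bytes are intact, but it still provides no shared visibility, no sequencing and no acknowledgments.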
For those who are already practicing Managed File Transfer, these challenges and needs sound familiar and can be seen as a new pattern for MFT.
Managed File Transfer solutions
This new pattern naturally finds its place in a Digital Managed File Transfer Shared Service (#DMFTSS) initiative. It addresses the new challenges of IT: consolidating and reducing costs on one hand, and elevating quality of service, data governance and operational efficiency on the other.
The “Managed Shared File” pattern relies on the deployment and availability of a component that monitors the file-based synchronization between applications, with or without an actual file transfer across the network.
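To make the idea concrete, here is a deliberately simplified sketch of what such a component does at its core: watch the shared file and turn changes into status events that both applications and operators can consume. This illustrates the pattern only, not how any particular product implements it; the event names are hypothetical.

```python
import time
from dataclasses import dataclass
from pathlib import Path
from typing import Iterator, Optional

@dataclass
class FileState:
    mtime: float
    size: int

def watch(path: Path, poll: float = 5.0) -> Iterator[dict]:
    """Emit a status event whenever the shared file appears, changes or
    disappears, so 'what is the status of my file?' has a single answer."""
    last: Optional[FileState] = None
    while True:
        if path.exists():
            stat = path.stat()
            current = FileState(stat.st_mtime, stat.st_size)
            if last is None:
                yield {"event": "detected", "path": str(path), "size": current.size}
            elif current != last:
                yield {"event": "updated", "path": str(path), "size": current.size}
            last = current
        elif last is not None:
            yield {"event": "removed", "path": str(path)}
            last = None
        time.sleep(poll)
```

A production-grade component adds integrity checks, sequencing, acknowledgments and a central view of every flow, which is what makes the synchronization “managed” rather than a script on a shared mount.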
This pattern is natively supported by Axway Transfer CFT in combination with Axway Central Governance, and this type of synchronization is managed as a flow between two applications. This brings an additional benefit to IT teams looking for increased agility and software-defined infrastructure: if one application is moved to separate object storage, IT Ops just need to redeploy the defined flow and the synchronization continues to work, now across the network, but still as originally designed.
Read more about digital Managed File Transfer solutions here.