DevOps is the collaboration between application development and IT operations. Where the two were once walled off in silos that communicated only through change requests, today's demands on IT require that they operate in close cooperation: every task, from provisioning to coding to testing to the transition into production, must be automated, and new features and maintenance fixes must be delivered continuously.
The biggest constraint preventing this way of doing things is data. Databases and application stacks have grown enormous. Provisioning a full environment for each developer or tester, for each task of each project, is unrealistic when a single full environment might require dozens or hundreds of terabytes of storage. As a result, developers and testers are limited to a small handful of shared environments that are refreshed only every few months, and the resulting bottlenecks prevent concurrent work.
Data virtualization is a solution to that data constraint. Using data virtualization, it is possible to provide each developer and tester with a complete, private, read-write image of the full application stack and database, for each task and project, even if the production environment is hundreds of terabytes. Combined with server virtualization, data virtualization allows developers and testers to work concurrently on different tasks with all the resources they need, and to version entire environments along with their code changes.
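The reason a private image of a multi-terabyte database costs almost nothing is copy-on-write: each virtual copy stores only the blocks it has changed and reads everything else from a shared base image. The sketch below illustrates that idea in miniature; the class and variable names are illustrative, not from any particular product.

```python
# Minimal sketch of the copy-on-write principle behind data virtualization:
# clones share one base image and store only their own modified blocks.

class BaseImage:
    """A shared, read-only 'production' image made of numbered blocks."""
    def __init__(self, blocks):
        self.blocks = blocks


class Clone:
    """A private read-write view that stores only its modified blocks."""
    def __init__(self, base):
        self.base = base
        self.delta = {}  # block index -> changed content

    def read(self, i):
        # Prefer the clone's own change; fall back to the shared base.
        return self.delta.get(i, self.base.blocks[i])

    def write(self, i, content):
        # Copy-on-write: only the delta consumes new storage.
        self.delta[i] = content


base = BaseImage(["block-%d" % i for i in range(1_000_000)])  # "large" image
dev = Clone(base)   # a developer's private copy, ~zero extra storage
test = Clone(base)  # a tester's private copy

dev.write(42, "dev change")
print(dev.read(42))    # "dev change" -- private to the dev clone
print(test.read(42))   # "block-42"   -- the tester's copy is unaffected
print(len(dev.delta))  # 1 modified block stored, not 1,000,000
```

Production systems implement the same principle at the storage layer (for example, filesystem snapshots and clones), which is why dozens of full-size environments can fit in roughly the space of one.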
Agile development methods, using DevOps techniques for continuous delivery of bug-free software, require data virtualization as much as they require tools such as Chef, Puppet, VMware, VirtualBox, or Rally.
This presentation will describe the data constraint and its impacts on IT, and then explain the solution, including the details of how data virtualization works. Attendees will likely recognize the impacts of the data constraint in their own work.