Reduce Ediscovery Storage Costs Through Nearlining
Nearline. If this term isn’t in your ediscovery vocabulary, it should be. Nearlining is a feature that allows litigation teams to quickly and easily set aside irrelevant documents and keep them separate from the active document set in a discovery database. Moving non-critical data from active to nearline storage in a document review tool keeps the database nimble and reduces hosting costs, because nearline storage is less expensive than active storage.
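To make the two-tier idea concrete, here is a minimal sketch of how a review database might track active versus nearline documents. The Document and ReviewDatabase classes, tier names, and per-GB rates below are hypothetical illustrations of the concept, not the actual data model or pricing of ediscovery.com Review.

```python
from dataclasses import dataclass, field

# Hypothetical per-GB monthly hosting rates, for illustration only;
# actual rates vary by provider and contract.
RATE_PER_GB = {"active": 15.00, "nearline": 5.00}

@dataclass
class Document:
    doc_id: str
    size_gb: float
    tier: str = "active"   # every document starts in the active pool

@dataclass
class ReviewDatabase:
    documents: list[Document] = field(default_factory=list)

    def nearline(self, doc_ids: set[str]) -> None:
        """Move documents out of the active pool; they stay fully accessible."""
        for doc in self.documents:
            if doc.doc_id in doc_ids:
                doc.tier = "nearline"

    def restore(self, doc_ids: set[str]) -> None:
        """Bring nearlined documents back into the active pool at any time."""
        for doc in self.documents:
            if doc.doc_id in doc_ids:
                doc.tier = "active"

    def monthly_cost(self) -> float:
        """Hosting cost under the assumed tiered rates."""
        return sum(RATE_PER_GB[d.tier] * d.size_gb for d in self.documents)
```

Note that nearlining in this sketch is just a tier change: nothing is deleted, which is what keeps the operation fast and reversible.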
Nearlining Promotes Speed
Ediscovery can move slowly at times, and most litigation teams know the cost and time needed to effectively gather and isolate relevant documents. As such, they are eager to begin reviewing (or at least compiling) documents sooner rather than later. Kroll Ontrack’s nearlining function allows those involved in the process to do just that, as described in a recent ediscovery.com Review case study. Documents can be added to the pool early on, regardless of whether search terms have been determined, so litigation teams can collect broadly at the start and then home in on relevant documents later, without driving up costs.
Keep All of the Data at Your Fingertips
Ediscovery is an ongoing process, and the definition of what data is appropriate or relevant to a matter can change frequently as a case develops; workflows that cannot absorb those changes create undue delays and costs. As discussed in a recent ESI Report podcast, nearline technology in ediscovery.com Review allows users to set aside 100,000 documents in just 35 seconds with nothing more than the click of a mouse. The data is removed from the primary pool of relevant data but remains completely accessible, and nearlined data can be brought back into the main pool at any time if the definition of relevance changes as the matter progresses.
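Continuing the hypothetical sketch above, here is what that reversible set-aside might look like in practice. The document IDs and the pretend search results are invented for illustration; this is not the product’s actual interface.

```python
# Build a hypothetical 100,000-document database, then set aside
# everything that does not hit the agreed search terms.
db = ReviewDatabase([Document(f"DOC-{i:06d}", size_gb=0.001) for i in range(100_000)])

hits = {f"DOC-{i:06d}" for i in range(100_000) if i % 10 == 0}  # pretend search results
non_hits = {d.doc_id for d in db.documents} - hits

db.nearline(non_hits)   # one bulk operation; nothing is deleted

# Scope changes later in the matter? Bring the documents straight back.
db.restore(non_hits)
```

Because the operation only re-tags documents rather than exporting or deleting them, moving in either direction is cheap and safe.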
Reduce Volume, Save on Data Hosting
A clutter-free data set is a beautiful thing. Nearlining streamlines decluttering by letting users quickly and easily identify and set aside the obviously irrelevant “junk” data. This reduces the actively hosted data footprint, which lowers hosting costs, and it keeps the primary data pool smaller, so searches of that pool run faster and return more focused results.
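To put rough numbers on the savings, here is a back-of-the-envelope calculation. The per-GB rates, total volume, and junk fraction are assumptions chosen for illustration, not actual pricing or figures from the source.

```python
# Hypothetical hosting rates, for illustration only.
ACTIVE_RATE_GB_MONTH = 15.00    # assumed active-hosting rate, $/GB/month
NEARLINE_RATE_GB_MONTH = 5.00   # assumed nearline rate, $/GB/month

total_gb = 500.0                # assumed total collected data
junk_fraction = 0.60            # assumed share identified as clearly irrelevant

nearlined_gb = total_gb * junk_fraction
active_gb = total_gb - nearlined_gb

before = total_gb * ACTIVE_RATE_GB_MONTH
after = active_gb * ACTIVE_RATE_GB_MONTH + nearlined_gb * NEARLINE_RATE_GB_MONTH

print(f"Monthly hosting before nearlining: ${before:,.2f}")   # $7,500.00
print(f"Monthly hosting after nearlining:  ${after:,.2f}")    # $4,500.00
print(f"Monthly savings:                   ${before - after:,.2f}")  # $3,000.00
```

Under these assumed rates, nearlining 60 percent of the collection cuts monthly hosting by 40 percent, and every search now runs against a pool less than half its original size.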