Drive better outcomes with these eDiscovery best practices

Large data sets can be unwieldy in any industry, but they are particularly challenging for smaller legal firms. These large data sets not only have to be properly reviewed and assessed, but reviewed as thoroughly and accurately as possible. If smaller firms hope to compete with larger firms, and even with other small firms that have advanced technology, they need to reduce and manage their data sets as effectively as possible. Here are a few eDiscovery best practices to help you do just that.

eDiscovery Best Practices For Large Data Sets

Perform a Thorough Early Case Assessment

Your early case assessment (ECA) is one of the most important components of any case. Through your ECA, you will determine which items fall within your scope and which items can be safely discarded. The more thorough your ECA is, the less extraneous data your firm will need to deal with. The early case assessment will also help with:

  • Identifying major keywords and document custodians, which will control where the information is coming from and which information is important.
  • Finding the appropriate search criteria for eDiscovery documents. By testing these search criteria, the ECA can show which documents are in scope (see the sketch after this section).
  • Beginning the legal hold process, which will preserve any information that is needed, in addition to identifying the sources of this information.
  • Analyzing key metrics between the data that is provided and available and the data that is truly relevant to the case.

Once the ECA has been completed, a preliminary targeted review and analysis should be performed.
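To make the search-criteria testing concrete, here is a minimal Python sketch of the idea: filter a collection of documents by hypothetical custodians and keywords, and report how many documents the criteria hit. The document records, custodian names, and keyword list are illustrative assumptions, not part of any particular eDiscovery platform.

    # Illustrative documents; in practice these would come from your collection tool.
    documents = [
        {"id": 1, "custodian": "j.smith", "text": "Q3 contract renewal terms attached"},
        {"id": 2, "custodian": "a.jones", "text": "Lunch on Friday?"},
        {"id": 3, "custodian": "j.smith", "text": "Revised indemnification clause for review"},
    ]

    keywords = ["contract", "indemnification", "renewal"]   # assumed key terms
    custodians = {"j.smith"}                                 # assumed key custodians

    def in_scope(doc):
        """A document hits if it belongs to a key custodian and matches any keyword."""
        text = doc["text"].lower()
        return doc["custodian"] in custodians and any(k in text for k in keywords)

    hits = [doc["id"] for doc in documents if in_scope(doc)]
    print(f"{len(hits)} of {len(documents)} documents hit the search criteria: {hits}")

Testing a few candidate keyword lists this way makes it easy to see whether your criteria are pulling in far too many documents or missing ones you know are relevant.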

Perform Thorough Data Sampling

When you’re working with large volumes of information, it may not be possible to sort through, review, and categorize all of it. This is where data sampling comes in. Through data sampling, you select a random subset of documents from the larger data set, then review and analyze that sample. The sample still needs to be both sufficiently large and genuinely random, so that it is truly representative of the data set.
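As a simple illustration of the idea, here is a minimal Python sketch that draws a reproducible random sample from a set of document IDs. The collection size, sample size, and seed are illustrative assumptions; your actual sample size should come from your sampling plan.

    import random

    all_doc_ids = list(range(1, 100_001))   # stand-in for IDs of every collected document
    sample_size = 1_000                     # assumed size from your sampling plan

    random.seed(42)                         # fixed seed so the same sample can be reproduced
    sample = random.sample(all_doc_ids, sample_size)

    print(f"Reviewing {len(sample)} of {len(all_doc_ids)} documents "
          f"({len(sample) / len(all_doc_ids):.1%} of the collection)")

Fixing the seed matters if you ever need to show how the sample was drawn; the percentage reported at the end is a quick sanity check that the sample is large enough to be defensible.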

Utilize Predictive Review

Because manually reviewing very large data sets is prohibitively difficult, it’s often necessary to turn to the technological advantages of predictive review. A predictive review platform examines your earlier sample sets to learn what you are looking for within the eDiscovery scope.

Using these sample sets, a predictive review tool will go through all of the data to find the documents that are important and relevant in a fraction of the time it would take human reviewers. This can even be done with paper documents, once they are scanned and run through an OCR solution. Predictive review is one of the most important technologies for handling large data sets.
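Under the hood, this is a text-classification workflow. The following Python sketch, assuming scikit-learn is installed, trains a simple model on a handful of human-reviewed sample documents and then ranks unreviewed documents by predicted relevance. The documents, labels, and model choice are illustrative assumptions rather than a description of how any particular predictive review product works.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Reviewed sample: document text plus a human label (1 = relevant, 0 = not relevant)
    sample_texts = [
        "Signed amendment to the supply contract",
        "Holiday party photos",
        "Indemnification language flagged by counsel",
        "Fantasy football standings",
    ]
    sample_labels = [1, 0, 1, 0]

    # Unreviewed documents the model will score
    unreviewed = [
        "Draft contract renewal with redlines",
        "Cafeteria menu for next week",
    ]

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(sample_texts), sample_labels)

    # Rank unreviewed documents by predicted probability of relevance
    scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
    for text, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {text}")

The ranked output is the point: reviewers start with the documents the model scores as most likely relevant, and the labels they assign can be fed back in to retrain and improve the model.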

Store the Data in a Secure, Accessible Way

Cloud-based hosting is one of the best solutions for large data sets. Not only does cloud-based hosting have the resources required to store these large data sets, but it also makes them accessible from anywhere in the world. Processing large data sets with cloud-based hosting is both easier and faster, making it less likely that your data will be held up due to technical issues. Of course, not all cloud-based solutions meet the high security standards required for storing legal documents; it’s often ideal to find a cloud-based system targeted to the legal industry.

Using the above eDiscovery best practices, even very large data sets can be reviewed efficiently. But you do need the right processes and the right tools. Platinum IDS can help. Contact Platinum IDS today to learn more about managing the eDiscovery process with extra-large data sets.

Author: Sid Newby
