The FTC released its report, “Big Data: A Tool for Inclusion or Exclusion?”, on January 6, 2016. The report draws on a public workshop the FTC held on September 15, 2014. Since then there has been much discussion about whether “Big Data” is a good thing or a bad thing. In my opinion, it depends on how it is used, which, by the way, is exactly what the FTC examines in its report.
The report focuses on the USE of data, as opposed to the collection, analysis, and storage of the data. Those topics were covered in the FTC’s 2014 report, “Data Brokers: A Call for Transparency and Accountability,” which insideARM reported on in May 2014.
As for the use of data housed by big data companies, the FTC is quick to caution about everything that could go wrong with access to that data. But the report also mentions several instances where big data analytics can help consumers, for example, those with thin or no credit files.
The report outlines several questions that businesses should consider before pursuing big data analytics as part of their processes. It is also very clear that data must be used in ways that are not exclusionary or discriminatory. The Commission “encourages companies to apply big data analytics in ways that provide benefits and opportunities to consumers, while avoiding pitfalls that may violate consumer protection or equal opportunity laws or detract from core values of inclusion and fairness” (page v of the report).
Much of the report outlines actual uses of Big Data in today’s commerce, but then follows up with many hypothetical situations in which the use of that data could harm consumers. What I found interesting is that the report is quick to discuss the harm, real or perceived, but slower to acknowledge that without big data, many of the consumers the FTC is trying to protect might never have received the benefits its use afforded them. The report cites the example of using big data analytics to predict that particular consumers are not likely to respond to a prime credit offer.
If big data analytics incorrectly predicts that particular consumers are not good candidates for prime credit offers, that credit may never be offered to those consumers. But I would ask: what about the individuals it DOES correctly identify for that prime credit offer? For example, suppose 300 people are granted credit they would not otherwise have received because the creditor used big data analytics to market to them, while 10 people did not receive that marketing because of those same analytics. Doesn’t the good outweigh the harm? Or should we stop using those analytics because of those 10 people, and forgo the good they did the 300? Unfortunately, as with most other situations when dealing with data, there is no 100% perfect solution.
But data is not alone in this situation; in many other contexts we accept less than 100% and still act. There may be no 100% proof that a certain medication will cure a disease, yet most people will opt to take that medication if the chances are “good.” That does not mean big data analytics companies are not constantly refining their models to get closer to 100%. So the real question is: can we get close enough that the level of performance is acceptable?
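The 300-versus-10 trade-off above can be made concrete with a back-of-the-envelope calculation. The numbers, function names, and framing here are purely illustrative (mine, not the FTC’s): the 10 consumers who were incorrectly screened out are, in modeling terms, false negatives.

```python
# Illustrative back-of-the-envelope for the 300-vs-10 hypothetical.
# A marketing model flags consumers for a prime credit offer; some
# eligible consumers are correctly reached, and some are incorrectly
# screened out (false negatives).

def net_beneficiaries(correctly_reached, wrongly_missed):
    """Consumers helped minus consumers harmed by the model's errors."""
    return correctly_reached - wrongly_missed

def miss_rate(correctly_reached, wrongly_missed):
    """Share of eligible consumers the model failed to reach."""
    eligible = correctly_reached + wrongly_missed
    return wrongly_missed / eligible

reached = 300   # consumers who received the offer thanks to analytics
missed = 10     # eligible consumers the model incorrectly screened out

print(net_beneficiaries(reached, missed))    # 290
print(round(miss_rate(reached, missed), 4))  # 0.0323
```

On these hypothetical numbers, roughly 3% of eligible consumers are missed; whether that is “close enough” is exactly the judgment call the paragraph above describes.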
The report also includes an intriguing appendix: the “Separate Statement of Commissioner Maureen K. Ohlhausen.” Commissioner Ohlhausen supports the report but also voices some cautions. In her statement she says, “If we give undue credence to hypothetical harms, we risk distracting ourselves from genuine harms and discouraging the development of the very tools that promise new benefits to low-income, disadvantaged, and vulnerable individuals.” She closes by stating that “[her] hope is that future participants in this conversation will test hypothetical harms with economic reasoning and empirical evidence” (page A-2).
The report itself outlines the laws that uses of big data must comply with, such as the Fair Credit Reporting Act (FCRA), the equal opportunity laws, and the Federal Trade Commission Act (FTC Act). It focuses a great deal on the Equal Credit Opportunity Act (ECOA) and the various ways big data could be used to exclude certain demographics or to cause disparate treatment or disparate impact. This seemed to be a general theme: each section gives an overview of ways Big Data can be used for harm, but not an equally proportionate number of ways it could help consumers.
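To give a sense of how disparate impact is often screened for in practice, here is a minimal sketch of the “four-fifths rule,” a heuristic borrowed from EEOC employee-selection guidance rather than anything the FTC report prescribes: if one group’s selection rate falls below 80% of the most-favored group’s rate, the outcome warrants a closer look. All numbers below are hypothetical.

```python
# Sketch of the four-fifths (80%) rule as a disparate-impact screen.
# This heuristic comes from EEOC employment-selection guidance; it is
# shown here only to illustrate the concept, not as a legal standard
# under ECOA. All figures are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who received a favorable outcome."""
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the most-favored group's."""
    return rate_group / rate_reference

rate_a = selection_rate(60, 100)  # reference group: 60% approved
rate_b = selection_rate(40, 100)  # comparison group: 40% approved

ratio = adverse_impact_ratio(rate_b, rate_a)
print(round(ratio, 3))  # 0.667
print("flag for review" if ratio < 0.8 else "within four-fifths")
```

In this hypothetical, the comparison group is approved at only two-thirds the reference group’s rate, so the outcome would be flagged for a closer look.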
The report outlines many specific uses of Big Data and, again, cautions companies to avoid harming consumers when using it:
- Increase educational attainment for individual students
- Provide access to credit using non-traditional methods
- Provide healthcare tailored to individual patients’ characteristics
- Provide specialized healthcare to underserved communities
- Increase equal access to employment
However, the report seems to have left out a very large use case for Big Data: debt collections. It discusses use cases such as credit, employment, insurance, housing, and other consumer eligibility decisions such as check authorizations or tenant screenings, but it does not mention the use of big data in the collections environment.
Collection agencies, collection law firms, debt buyers, and creditors’ collection departments use big data and big data analytics every day, and they must be very careful in that use so as not to harm consumers. The same harms the FTC mentions in its report could result from collectors improperly using big data in their daily decisioning activities.
For example, suppose a collector on the phone with a consumer uses a skip-trace or contact-and-locate website to pull up data on that consumer to help with the talk-off. The collector should be careful about what type of data is being used, and should be certain that the data, and the company providing it, comply with the regulations governing that data. This is especially true if the data will help the collector decide whether to offer a settlement and how much it should be, what a monthly payment arrangement should be, or any other element of the talk-off that may be beneficial or detrimental to the consumer. If data is used in that decisioning conversation, it is covered under the FCRA and should be treated as such. Misusing that data could certainly cause disparate impact to the consumer.
Overall, the FTC’s report is a good wake-up call for those using big data, and for those who may not think they are using big data but are. It’s a great opportunity for everyone in our industry to take a second look at the data they use, where they get it, and whether they are using it correctly. There can be a fine line between the proper and improper use of data, and the FTC is clearly going to be watching closely to make sure not only that big data companies properly collect, analyze, and store their data, but also that the companies using that data do so properly.