KDnuggets, May 2015 (15:n17): Opinions, Interviews, Reports

Interview: Linda Powell, Consumer Financial Protection Bureau (CFPB) on Data Governance for Finance Industry


We discuss the chief data officer role at CFPB, big data opportunities and challenges, ontology, vintage data, data governance trends, advice, and more.



Linda Powell is the Chief Data Officer of the Consumer Financial Protection Bureau. Previously, she was the CDO at the Office of Financial Research within the U.S. Treasury Department and Chief of the Economic Data Management and Analysis section in the Research Division at the Board of Governors of the Federal Reserve System.

She has a BA in Economics from Rutgers University and an MS in Quantitative Finance from George Washington University. She has over 25 years of experience in the finance industry, including at the Federal Reserve Bank of New York, the Board of Governors, the OFR, and the CFPB.

Here is my interview with her:

Anmol Rajpurohit: Q1. Can you describe your role at Consumer Financial Protection Bureau (CFPB)? What are the typical challenges that you face on a regular basis?

Linda Powell: The Chief Data Officer role oversees the CFPB's governance, acquisition, documentation, storage, analysis, and distribution of data. There are several advantages to having the life cycle of data centrally managed. The first is the increased ability to ensure strong internal controls and adherence to best practices. There are also economies of scale in data management, so centralizing it creates efficiencies and helps ensure consistency across the Bureau. An advantage of this role at a new agency is that we don't have legacy systems or processes that we need to accommodate.

AR: Q2. How has the rapid rise of Big Data in the last decade impacted the finance industry? What are the top opportunities and challenges?

LP: The volume of data in the finance industry has grown exponentially in the last decade. In 2005, a 500 GB dataset was considered big. By 2011, the industry was working with multi-terabyte files. When working with large and complex datasets, the need for good data management practices increases. This growth in volume should further encourage standards and best practices across the industry.

AR: Q3. How do you define "ontology"? Why is it important for data management?

LP: An ontology is a dictionary in which definitions are derived in part from relationships. I like to use the analogy that an ontology is like a forest of family trees. In a family tree, Linda Powell can be defined as a woman, a mother, a wife, a daughter. Although I don't change, how I'm defined depends on the relationship. If you add multiple family trees, the relationships grow to include aunt, sister-in-law, daughter-in-law, and co-worker.
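The family-tree analogy can be sketched in code: the same entity acquires new definitions as relationship edges are added. This is a minimal illustrative sketch, not CFPB's actual ontology; the names and the relationship-to-role vocabulary are assumptions made up for the example.

```python
# Illustrative sketch: an entity's "definition" (its set of roles) is derived
# from relationship edges, not stored on the entity itself.
# All names and relations below are hypothetical.

# relationship -> role it confers on the subject of the edge
ROLE_FROM_RELATION = {
    "parent_of": "mother",
    "married_to": "wife",
    "child_of": "daughter",
    "sibling_of_spouse_of": "sister-in-law",
}

def roles(edges, person):
    """Return the roles a person holds, derived entirely from relationships."""
    result = set()
    for subject, relation, _object in edges:
        if subject == person and relation in ROLE_FROM_RELATION:
            result.add(ROLE_FROM_RELATION[relation])
    return result

# One "family tree" worth of edges; adding more trees adds more roles.
family = [
    ("Linda", "parent_of", "Alex"),
    ("Linda", "married_to", "Sam"),
    ("Linda", "child_of", "Pat"),
]

print(roles(family, "Linda"))  # the person is unchanged; the roles come from the edges
```

Adding an edge such as `("Linda", "sibling_of_spouse_of", "Chris")` would extend her definition with "sister-in-law", mirroring how merging family trees expands the relationships.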

AR: Q4. What do you mean by "vintage data"? What is its relevance for research on the effectiveness of financial policies?

LP: Because reported data can be revised, the data can have different values depending on when you do your analysis. For example, GDP is released and may be revised as new information is received. Vintage data lets you look at what the value was on any specific date. This is important in evaluating the quality of policies: to evaluate the quality of decisions, you need to know what information was available at the time the decision had to be made.
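The idea can be sketched as a series that keeps every reported value together with its release date, so analysis can ask "what was known as of date D?". This is a minimal sketch of the concept only; the GDP figures and release dates below are made up for illustration.

```python
# Illustrative sketch of vintage data: each observation period keeps every
# revision alongside its release date, so we can reconstruct what a
# decision-maker actually knew on a given day. Figures are hypothetical.
from datetime import date

class VintageSeries:
    def __init__(self):
        # observation period -> sorted list of (release_date, value)
        self.vintages = {}

    def record(self, period, release_date, value):
        self.vintages.setdefault(period, []).append((release_date, value))
        self.vintages[period].sort()

    def as_of(self, period, asof_date):
        """Latest value for `period` released on or before `asof_date`."""
        value = None
        for release_date, v in self.vintages.get(period, []):
            if release_date <= asof_date:
                value = v
        return value

gdp = VintageSeries()
gdp.record("2010Q1", date(2010, 4, 30), 3.2)  # advance estimate
gdp.record("2010Q1", date(2010, 5, 27), 3.0)  # second estimate
gdp.record("2010Q1", date(2010, 6, 25), 2.7)  # third estimate

# A policymaker deciding in early May 2010 only saw the advance estimate:
print(gdp.as_of("2010Q1", date(2010, 5, 1)))  # 3.2
# Later analysis sees the fully revised figure:
print(gdp.as_of("2010Q1", date(2010, 7, 1)))  # 2.7
```

Evaluating a May 2010 policy decision against the 2.7 figure would be unfair hindsight; the vintage query recovers the 3.2 that was actually available at the time.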

AR: Q5. How do you assess the current maturity of data standards in the finance industry?

LP: The financial crisis highlighted the need for data standards in the financial sector. The finance industry is now starting to embrace data standards. This is evidenced by initiatives such as the Legal Entity Identifier.