Written by Margot Brandenburg Posted on February 25, 2010
Margot Brandenburg is an Associate Director at the Rockefeller Foundation in New York City, where she leads the social and environmental performance component of their Impact Investing program initiative.
Much has been made over the past year and a half of the outsized (and, it turns out, unsustainable) returns being made on Wall Street, and the negative externalities of those investment decisions for taxpayers and the public. This phenomenon, enabled by opacity, non-disclosure, and ill-purposed “creativity” in accounting and reporting, is itself a reminder that XBRL and related pieces of information architecture have a critical role to play in the standardization, transparency, and exchange of information about company and investment performance. Perhaps greater implications for the future of XBRL, however, come from a countervailing trend: the rise of impact investing and related strategies for integrating social and environmental impact in investment decisions.
There are a number of ways investors can incorporate social and environmental considerations into the due diligence and management of their investments. For purposes of simplification, it is perhaps easiest to group these into two categories: helping or requiring companies to do “less bad,” and helping or requiring them to proactively “do good.” The line between the two is arguably a blurry one, and may lose its distinction entirely in certain cases. Nonetheless, it is a helpful starting point for understanding the multiple applications of non-financial information in investing.
Activities that fall into the first category – helping companies do less bad – are typically employed with respect to large, publicly traded companies. They may take the form of shareholder resolutions or divestment strategies at the company level, or investment screens at the fund level, where fund managers avoid investments in whole industries or geographies (e.g., tobacco, Darfur) or in the worst-performing companies within a given industry (such as the coal company that pollutes more than its peers). According to a recent study by the consulting firm Booz & Co, 7% of all global assets are now screened. In the US, the Social Investment Forum estimates this figure to exceed 11%.
Screening and related strategies for encouraging corporations to do less bad received a huge boost last month, when the SEC issued a rule providing interpretive guidance to companies for reporting on the climate change-related dimensions of their activities. The role of XBRL in facilitating the communication of climate change-related data has already been codified through the Global Reporting Initiative, a framework for sustainability reporting that became the first of its kind to utilize an XBRL taxonomy (see Sean Gilbert’s post on this blog). Additional frameworks, such as that of the Carbon Disclosure Project, may soon migrate to an XBRL standard as well. The recent SEC ruling may also pave the way for mandating the disclosure of additional dimensions of non-financial performance, such as those related to water, human rights, etc. – all of which would be usefully reported and communicated using XBRL.
While smaller in size than screening, impact investing – investments in companies and funds that actively seek to generate a positive social and/or environmental impact while providing a financial return – is also poised to play an important and growing role in the coming years and decades. Impact investing includes sectors like microfinance and clean technology, where it has penetrated mainstream investment activity, as well as emerging sectors like water, health, and agriculture. A 2009 report by the Monitor Institute, Investing for Social and Environmental Impact, describes the rise of activity in this area and estimates that it could grow to 1% of total assets under management (estimated to be $30 trillion at the end of 2008).
Impact investing lies somewhere between philanthropy and purely commercial investment activity, in a ‘murky middle’ that is often still sub-scale, confusing, and fragmented. However, there are powerful indications that it is growing in size and coherence. Within the past few years, investment banks have launched social sector finance units, pension funds and insurance companies have created dedicated social investment funds (often alongside negatively screened funds), and a proliferation of foundations and family offices have concentrated their assets in this area. The diversity of impact investors is matched by the range and creativity of business models that are putting this type of money to work, from microfinance banks in Cambodia to solar panel manufacturers in Ohio to agricultural cooperatives in Tanzania. In between them, a number of specialized fund managers, investment vehicles, and service providers have emerged to facilitate the intermediation of capital. The participants in this marketplace are primarily still private actors, and the mainstay of its activity remains largely outside the purview of the SEC and other regulators. However, investors, funds, and company managers active in impact investing all require extensive information on social and environmental – as well as financial – performance, and thus represent a large source of demand for new applications of XBRL.
The Microfinance Information eXchange (MIX) became the first organization to publish an XBRL taxonomy in service of the impact investing industry, and it was recognized by XBRL International in 2009. The MIX’s Microfinance Taxonomy 1.0 is (as one might expect from the name) specific to microfinance, which is but one sector within the broader impact investing industry. The Global Impact Investing Network (GIIN) recently published v1.0 of an XBRL taxonomy called IRIS (Impact Reporting and Investment Standards), which is designed to service a broad range of sectors and activities within impact investing. IRIS includes microfinance – and incorporates relevant elements of the MIX taxonomy for this purpose – as well as domains such as environment, community development, health, and agriculture. Like the broader impact investing industry itself, IRIS lies at the intersection of profit-making and social-purpose motivations: it emerged as a partnership between non-profit organizations whose mission is to find and scale solutions to the world’s most pressing problems, and for-profit companies with expertise in the areas of accounting, auditing, and business reporting. [Disclosure: Hitachi Consulting is one of the for-profit companies – ed.]
The creation of the IRIS taxonomy enables the industry-level data aggregation and benchmarking that are crucial for setting performance standards for this hybrid area of investment activity. These activities are being undertaken by the GIIN, with a combination of support from private and non-profit partners. The IRIS taxonomy is also being embedded in a number of related products and information services, such as portfolio management software and rating systems (notably the Global Impact Investing Rating System, or GIIRS). Standard definitions and data elements, which form a common language, must serve as the basis for the myriad pieces of infrastructure that the emerging impact investing industry requires.
The diversity of activity represented within impact investing will also require the exchange of batch data between sector-specific aggregators of information and industry-wide stewards such as the GIIN. Here too XBRL has a critical role to play. The IRIS and MIX teams are currently collaborating on the development of technological infrastructure to communicate data, using XBRL as the means of exchange. They are, moreover, doing so in such a way as to maximize the scalability and extendibility of the solution so that it can, in a future phase, support exchanges with other organizations and in additional sectors.
On February 15, the XBRL International Standards Board (XSB) released “XBRL: Towards a Diverse Ecosystem.” This Discussion Document seeks feedback from all XBRL stakeholders – developers, filers, analysts, investors, etc. -- on the future business requirements and technical roadmap for the data standard. The feedback form beginning on page 17 of the Document can be completed online; a comment letter can also be sent by email. The deadline for all submissions is March 19.
The following Q&A provides details about the XSB’s proposals and the input it seeks. It is based on written and oral interviews with John Turner, chair of the XSB for the past three years, and Chair Designate Chethan Gorur, who will take the reins in April.
1. Let’s start with just a few basics on the XSB. When was it created, who sits on the Board, and what is its mission?
Established in 2006, the XSB is a dedicated group within XBRL International tasked with overseeing the production of all technical work products (like XBRL technical specifications) and ensuring all such materials are of uniformly high quality. Its nine members represent a wide cross-section of the XBRL community, spanning the technical architecture, product management, program management, and business reporting domains. Their biographies and additional information about the XSB can be found on the XBRL website.

2. The Discussion Document “XBRL: Towards a Diverse Ecosystem” that the XSB released last week seeks input from software developers, filers, end users (e.g., investors), and others to ensure the standard’s technical evolution over the next decade and the continued pace of XBRL adoption. What circumstances led the XSB to initiate such a dialogue at this time, and what does it hope to accomplish?
There are two things we need to emphasize. First, XBRL has been a very successful and stable standard over the last decade or so. Dozens of countries have adopted XBRL, not only for financial reporting but for a broad range of business and regulatory information.
So we're not talking about changing things. We're not suggesting that people should stop using XBRL the way it is. What we’re doing is long-term planning -- our time horizon is five to ten years. And that’s why we’re reaching out to all XBRL constituencies. We're in an information-gathering mode, seeking to find the best ways of building on a very sound base to get more value out of XBRL for everyone.
Second, we are just at the discovery phase of this process. We’ve come up with specific goals and proposals for each, as described in the Discussion Document. But they’re not set in stone; there’s no fait accompli. It might be that the various XBRL communities want us to focus on these areas, or it might be that they have different ideas.
So what we're looking for is confirmation about whether the goals we’ve come up with make sense. And the best way to do that at this stage is to try and expose those goals as much as possible, then get as many people as we can to tell us what they think of them: let us know whether we’re on the right track or not. We've talked to a lot of people, but not nearly enough.
To put a fine point on it: We’re not just going through the motions of asking for feedback as a PR exercise; we’re really interested to hear what people think.
3. OK, let’s talk then about specific goals. As the Discussion Document details, the XSB seeks feedback for its proposals in three main areas:
Ease of use for developers
Enabling information comparability around the world
Simplifying the use of XBRL data for analysis
Could you briefly describe the main challenge in each area and the proposals the XSB is contemplating to meet it?
Ease of Use for Developers

The Challenge: For those of us who deal with XBRL every day, working with it comes naturally. But the XSB is very aware there are plenty of technologists – whether they’re inside accounting and business systems vendors, or working at enterprises, governments, and regulators – who are much more comfortable using technologies like SQL, Java, and .NET, rather than the XML-based standards that underpin XBRL.
Developers who use XML do so in a variety of ways, ranging from the very simple to the highly sophisticated. XBRL takes advantage of many of the most sophisticated mechanisms contained in the XML standards, which can be challenging. We want to work on ways to make it easier for a broader group of developers to access the standard, and to benefit from the power of XBRL-based reporting – a way to easily move performance reports, which are often inherently complex, across systems and across organizations. We need to introduce these improvements in a way that protects (indeed enhances, via the network effect that more users provide) existing investments.
Proposals: There are a number of ideas that the XSB has set out in the Document. But to choose one, perhaps the area to focus on is the idea of producing a new abstract model (a UML model in all likelihood) of XBRL that provides a way to interact with the standard using a range of alternative methods, including via SQL, via API signatures that would allow interoperable .NET and Java APIs to be constructed, and probably via enhanced interoperability with the W3C’s Semantic Web technologies. This is, obviously, a long-term project, and XBRL as a syntax would continue to be an important component in the mix. Please read the Document (p. 12) for some of the other ideas that the XSB is putting forward in this area.
Enabling Information Comparability Around the World

The Challenge: Since the early days of XBRL, there has been a mantra within the Consortium that XBRL models existing reporting processes; it doesn’t change them. XBRL doesn’t alter GAAP, or impose a standard chart of accounts on companies. The language merely allows, for example, US and Japanese GAAP to be encapsulated inside a taxonomy that can be used to enable electronic reporting of financial statements that conform to those local reporting standards. XBRL doesn’t provide a way to compare information that conforms to different taxonomies, as these are based on different reporting norms.
In effect, while XBRL allows you to move information around at the speed of light, avoiding rekeying and complex transformation processes at every step of a business reporting supply chain, it doesn’t currently allow you to move data between information supply chains. The challenge is to see if XBRL can do more, and provide common standards for the comparison of data for specific purposes.
Proposals: Again, there are a number of ideas in this area, but perhaps the one that we are most interested in gaining feedback on is the potential to use registries to allow the creation of specific comparators between taxonomies. These registries could be used to drive the automated comparison of information across different countries and accounting systems. An example: Petrochemical companies obviously exist across the world, but comparing their financial statements is complicated by the various accounting standards in use. An XBRL registry could be used to declare that, for the purposes of credit analysis, a concept like “cash on hand” under IFRS is the same as “cash on hand” under US GAAP as well as Japanese GAAP. Notice that this would be for a specific purpose and users would need to be aware of the way that purpose is defined. It would not allow an oil company that has to report under US GAAP to suddenly use a Japanese accounting concept. However, users of that information might be prepared to use the registry to line up the performance of many different companies around the world.
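The registry idea described above can be sketched in a few lines of code. In this illustration, a registry entry declares that, for one stated analytical purpose, concepts drawn from different taxonomies may be treated as comparable. The element names (`ifrs:CashOnHand`, etc.) and the `credit-analysis` purpose are hypothetical stand-ins, not actual taxonomy elements or registry entries:

```python
# Illustrative sketch of a purpose-scoped comparator registry.
# All concept names and purposes below are invented for illustration.
REGISTRY = {
    "credit-analysis": {
        "cash_on_hand": {
            "ifrs": "ifrs:CashOnHand",        # hypothetical element names
            "us-gaap": "us-gaap:CashOnHand",
            "jp-gaap": "jp-gaap:CashOnHand",
        },
    },
}

def comparator(purpose: str, measure: str) -> dict:
    """Return the taxonomy-specific element names declared comparable
    for the given purpose, or raise if no mapping has been registered."""
    try:
        return REGISTRY[purpose][measure]
    except KeyError:
        raise KeyError(f"No mapping registered for {measure!r} under {purpose!r}")

def normalize(facts: dict, taxonomy: str, purpose: str, measure: str):
    """Look up a reported fact via its taxonomy-specific element name."""
    element = comparator(purpose, measure)[taxonomy]
    return facts.get(element)

# A filer reporting under US GAAP and another under IFRS can be lined up
# for credit analysis without either one changing how it reports.
us_filing = {"us-gaap:CashOnHand": 1_200_000}
ifrs_filing = {"ifrs:CashOnHand": 950_000}
print(normalize(us_filing, "us-gaap", "credit-analysis", "cash_on_hand"))   # 1200000
print(normalize(ifrs_filing, "ifrs", "credit-analysis", "cash_on_hand"))    # 950000
```

Note that the mapping is keyed by purpose: the same registry could decline to equate these concepts for, say, regulatory capital analysis, which matches the caveat above that users must be aware of how each purpose is defined.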
Taxonomy profiles, another of our proposals in this area, are a rather different idea. XBRL is a language, a powerful and flexible language that can help you express any kind of performance information imaginable. Its very flexibility can be a problem for some environments, so the profile mechanism is all about narrowing the way that you want XBRL to work in certain circumstances. For example, we could devise a profile that is designed to optimize data collection for prudential regulators; another could be tailored to internal reporting. The idea is to limit the design choices that are available for certain kinds of reporting models, making it easier for users to embrace XBRL and for software professionals to ensure that their information will be interoperable across systems. It’s not a new idea, but it’s one that has proven itself in multiple standards environments. Why not XBRL too?
Simplifying the Use of XBRL Data for Analysis

The Challenge: Quite simply, importing XBRL into modern analytical systems can be a chore. The key problem is that wherever extension taxonomies are used, you end up with, in effect, a set of overlapping Venn diagrams, which make managing your analytical models tricky, to say the least.
Proposals: Once again, this challenge is a product of the complexity of financial and performance reporting, rather than XBRL per se – you have this exact same problem if you have companies reporting using CSV files that refer to different definitions in say Word documents (something we’d strongly recommend against, by the way!). We believe it should be possible to develop a number of techniques that make the consumption of XBRL information a simpler exercise. Fundamentally, we want to make it easier to access information in XBRL documents using techniques such as SQL, Sparql, and XQuery. The tricky bit will be to define how consuming applications should deal with the semantics of overlapping taxonomies that define data from multiple source documents.
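One way to picture the "access XBRL via SQL" goal above: once tagged facts are flattened into a relational table, ordinary SQL can drive the analysis, and the overlapping-taxonomy problem becomes a matter of filtering on where each element was defined. The table layout, element names, and figures below are illustrative assumptions, not an actual XBRL mapping:

```python
# A minimal sketch of "XBRL facts as relational rows", queried with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE facts (
        entity   TEXT,   -- reporting company
        element  TEXT,   -- taxonomy element (base or extension)
        taxonomy TEXT,   -- which taxonomy defined the element
        period   TEXT,
        value    REAL
    )
""")
conn.executemany(
    "INSERT INTO facts VALUES (?, ?, ?, ?, ?)",
    [
        ("BankA", "Assets", "base", "2009-Q4", 500.0),
        ("BankA", "CustomReserve", "ext-A", "2009-Q4", 12.0),  # extension element
        ("BankB", "Assets", "base", "2009-Q4", 750.0),
    ],
)

# A consumer can restrict itself to the shared base taxonomy and ignore
# extension elements whose semantics it has not mapped -- the overlapping
# Venn diagrams reduced to a WHERE clause.
rows = conn.execute(
    "SELECT entity, value FROM facts "
    "WHERE element = 'Assets' AND taxonomy = 'base' ORDER BY entity"
).fetchall()
print(rows)  # [('BankA', 500.0), ('BankB', 750.0)]
```

The hard part the XSB identifies remains outside this sketch: deciding, for each consuming application, what the semantics of the extension elements are and whether they should be folded into, or kept apart from, the base concepts.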
It should be obvious that much of what we are talking about in this and other areas is not exactly an overnight task. The XSB is thinking along the lines of a 5-10 year timetable. Agreeing, as a community, on exactly what we want to tackle and in what order is the first step.

4. What kind of feedback would be useful from nontechnical and general business users, who may feel hesitant to make recommendations on a complex technology like XBRL?
One thing business people can do is to describe how they would like to use this technology. There may well be ways they would use the technology that we on the XSB don’t know about, maybe several we had not envisioned. That kind of information is very useful and will help form our long-term strategy, which builds on the existing success of XBRL and is done with a view to compatibility and the existing investments that entities around the world have made in this technology. We want to enhance the return on those investments by making the right decisions now about XBRL’s future direction.

5. What should XBRL stakeholders do now to inform themselves of the XSB’s proposals and provide the feedback it needs to evaluate them?
Start by reading the Discussion Document! Also, be on the lookout for the webinars we are organizing to educate the community about it. Speak to your colleagues in the community, or speak to a member of the Standards Board, but most important, respond to the questions we set out in the paper, either by email or via the electronic survey. To get your viewpoint noticed, responding to the survey is by far the most effective mechanism. We want to know what your priorities are, what you think of the ideas that we’ve described in the Document, and we want to hear about your own ideas.

6. And once again, the deadline?
Many thanks to John and Chethan for the interview. While they strongly recommend that feedback about the Discussion Document be provided through the channels described above, both are happy to answer any questions about the survey itself that readers post in the Comments section of this blog.
Christopher Whalen is co-founder of Institutional Risk Analytics (IRA) and provides consulting services for auditors, regulators, and financial professionals. He edits The Institutional Risk Analyst, a weekly commentary on the institutions and financial markets that comprise the global political economy. He contributes often to publications like the New York Times and Barron's and appears regularly on CNBC. He has testified before the US Congress on a variety of financial issues.
The biggest underreported story about XBRL in the US is the FDIC. During the recent financial crisis, the nearly two years’ worth of quarterly data collected in the XBRL format has given the Treasury Department valuable insight into which financial institutions have suspect holdings.
Others have taken a more skeptical view, which at the extreme may be stated as “If the FDIC implementation was so great, why didn’t it do anything to stop the financial crisis?”
What do you think? What are reasonable expectations for the role that data standards in general, and XBRL in particular, can play in providing stability to financial systems?
Neal Hannon is entirely correct that the implementation of XBRL and other technologies by the FDIC has revolutionized the collection, validation, and distribution of bank Call Report information. But as my partner Dennis Santiago often notes in XBRL discussions, information collection is only half the job. Analysis, judgments, and decisions still need to be made downstream of the data repository. Whether or not regulators and politicians in Washington do the right thing when it comes to public policy choices has no direct relation to using XBRL to increase data transparency and accessibility, which is still the exception rather than the rule in most countries. The FDIC’s data collection effort, inclusive of XBRL, is the finest such public data resource in the world and is virtually free of errors. Sadly, this is the exception to the general state of non-disclosure in many industrial nations, even major nation states in Europe and Asia, where there is no public right to know. The FDIC data is a unique resource, and XBRL is just one of many tools employed by the FDIC to make it truly world class.
I don’t understand the objections raised by people who say “XBRL didn’t do anything to stop the financial crisis.” Such statements reveal an uninformed and self-serving world view that is refuted by the facts. Dennis and I have been looking at and analyzing superbly collected and disseminated FDIC bank data since 2003; but it must be said that the FDIC’s attention to data quality has been around a long time, decades in fact. Anyone who follows our work knows that we, along with many others, have been calling attention to the problems in data quality in many financial markets. Not everyone was deluded by the sloppiness. Nassim Taleb rightly accused the financial community of this in his “Black Swan” indictments. The reality is the community conspired to make the markets even more opaque and less transparent, thwarting the basic premise behind efforts such as XBRL, namely that everyone wants greater transparency.
For example, had the data tagging, collection, and distribution protocols of XBRL been adopted in areas such as residential and commercial mortgage loan securitization and OTC derivatives, the risks in these instruments might indeed have been made more apparent, and large portions of the financial crisis might have been averted. But that wasn’t in the business interest of the purveyors of these instruments. Remember, virtually all of these toxic securities were brought out as “private placements” via Rule 144A and are thus not registered with the SEC. So criticisms of XBRL for not helping to prevent the financial crisis in this respect are quite outrageous. My hope is that in the future, perhaps through the adoption of new regulations on bank securitization by the FDIC, we will see mandated public disclosure of all OTC securities and derivatives.
These problems of opportunistic opacity continue to this day. Look at the way that the International Swaps and Derivatives Association (ISDA) and the OTC dealers are dragging their feet on the standardization of FPML, the XML dialect chosen for tagging OTC derivatives transactions, and you can see the root of the problem. We have created a culture of deception and concealment in our financial industry that seeks to hide price and other data from the public to maximize monopoly profits from trading OTC securities and derivatives. The entire retrograde construct of OTC derivatives and securities is ripe for revolution when it comes to using XML dialects such as XBRL to make these data sets available to investors and the public. I think that is a very exciting and positive message for the XBRL community and for public policy in general, but this great technology is also a threat to many entrenched constituencies in the banking world who hate transparency and fear greater openness.
But you also said that the argument for XBRL was strongest at the “upstream,” agency level; in relative terms, the business use case at the “downstream,” end-user level was “not yet mature enough and powerful enough to withstand the critical scrutiny of business case needs.”
With respect to the FDIC, do you still hold that view?
Yes, we certainly continue to hold the view that the use case optimization of “upstream” and “downstream” technologies follows separate discovery paths. This point is borne out in the real world. There is absolutely nothing wrong with insisting that regulatory reporting and data collection continue to become more structured via some organizing construct like XBRL while at the same time insisting that the downstream delivery of information from those central libraries to the many types of analytical engines used to support decisions remain as diverse and multilingual as possible.
The actual operational implementation of the Central Data Repository (CDR) by the FDIC validates our judgment about the benefits of XBRL to the process of collecting, validating, processing, and then distributing this data to a myriad of different consumer groups. If we trace the production and use case process, those benefits become obvious. Upstream, the benefits are equally dramatic, manifesting as efficiency and quality improvements.
First, think of the FDIC-insured banks using the XBRL template to submit their Call Reports to the FDIC. The logic in the XBRL taxonomy allows the supervisory personnel at the FDIC to identify and correct any errors that may be made in the filing process. Anomalies are flagged and, where appropriate, are then subject to additional supervisory review. The process is transparent enough that we can observe it in real-time on the FDIC web services system. The benefit of XBRL has been to greatly increase the productivity of the review process and decrease cost in terms of personnel, and also improve accuracy to the point where input errors are basically eliminated. This issue of accuracy is of crucial importance to our firm, because we serve more than 20,000 retail consumers who use our automated bank ratings to inform their individual and corporate asset allocation choices in terms of bank depositories. It’s even more critical for bank counterparties who need to determine whether to extend business credit to a bank or demand payment in cash. And finally data completeness and timeliness at one regulatory agency, the FDIC, is what enables us to deliver powerful benchmarking services to other agencies, such as the SEC, to perform their mission.
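The validation step described above can be illustrated with a small sketch. The rules and field names below are invented for illustration; they are not the actual FFIEC edit checks, and a real implementation would apply rules defined in the XBRL taxonomy itself rather than hand-coded Python:

```python
# Hedged sketch of filing-time validation: cross-field checks flag
# anomalies for supervisory review. Fields and tolerances are hypothetical.

def validate_call_report(filing: dict) -> list[str]:
    """Apply simple cross-field checks and return a list of anomalies;
    an empty list means the filing passed."""
    anomalies = []
    # Additivity check: components should sum to the reported total.
    components = filing["loans"] + filing["securities"] + filing["cash"]
    if abs(components - filing["total_assets"]) > 0.5:
        anomalies.append(
            f"total_assets {filing['total_assets']} != sum of components {components}"
        )
    # Sign check: reported balances should be non-negative.
    for field in ("loans", "securities", "cash", "total_assets"):
        if filing[field] < 0:
            anomalies.append(f"{field} is negative")
    return anomalies

# A filing whose components do not add up is flagged rather than accepted.
filing = {"loans": 300.0, "securities": 150.0, "cash": 40.0, "total_assets": 500.0}
print(validate_call_report(filing))
# ['total_assets 500.0 != sum of components 490.0']
```

Catching such errors at the point of submission, rather than weeks later during manual review, is the productivity gain the interview describes.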
Next, there is internal processing to meet a variety of internal and external consumption needs. The data in the XBRL files is parsed into numeric data to support several dozen internal supervisory and reporting applications within the FDIC and other agencies in the FFIEC. Some of these use cases involve modern desktop applications for data analysis, while others feed legacy mainframe data applications that literally go back a quarter of a century and often serve single use case needs that are driven by law and regulation. Then we have several versions of the data, including XBRL documents and legacy CSV outputs that are tailored to meet the specific commercial needs of external rating agencies like us as well as the research and policy needs of government agencies and educational institutions. The FFIEC CDR downstream service layer supports PDF, CSV with a filename suffix of .SDF, and XBRL. Of interest, my partner Dennis notes that the downstream delivery standard evolving across the multiple government agencies we keep track of these days is steadily headed in the direction of PDF for reading by humans and flavors of CSV for interfacing machine-to-machine. Additional popular flavors include XLS for small data sets, SAS XPT for statisticians, and plain-vanilla XML as a verbose substitute for CSV. In all cases, the downstream analyzers have little need for the perfect accounting compliance and categorization constructs of the XBRL collection layer. These users quite reasonably demand that the data be cleaned of such overhead before it ever gets to them so they can perform their work with optimum productivity.
Finally, there is public distribution. If you look at the way in which a firm like ours consumes the FDIC data, the point regarding upstream versus downstream benefits becomes clear. Our primary need regarding the use of FDIC data is the calculation of arithmetic relationships and metrics to drive our bank performance and ratings model, The IRA Bank Monitor. We consume the FDIC data in two ways.
First, we gather the bank Call Reports dynamically from the FDIC CDR facility in real time and harvest the data into a database structure that allows us to calculate preliminary ratings for a particular bank unit. XBRL-based data collection delivers an enormous benefit here in terms of accuracy and timeliness, and has cut several weeks off of the waiting time for accessing many bank Call Reports. The preliminaries appear in the same timeframe as the SEC K/Q filings. We actually use the CSV output format from the FDIC. It’s purely a machine efficiency decision. Given known good data, one is actually indifferent to the format in which it’s delivered. The processing time is easily an order of magnitude faster for database injections with CSV than with XML. Remember that our firm is running individual and peer group level analytics on thousands of banks per quarter in an SQL environment, so the efficiency of the data transport and calculation regime is critical to providing a data analytics product that is up to commercial grade for both institutional and consumer users.
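The "CSV for machine-to-machine efficiency" point can be demonstrated with a toy benchmark. The data below is synthetic and the ratio will vary by machine and parser, but it illustrates the design choice: given known-good data, a row-oriented format loads into a database with far less parsing overhead than an equivalent verbose XML encoding:

```python
# Illustrative benchmark: loading the same synthetic facts into SQLite
# from CSV versus from a verbose XML encoding of the same data.
import csv, io, sqlite3, time
import xml.etree.ElementTree as ET

N = 20_000
csv_text = "".join(f"bank{i},{i * 1.5}\n" for i in range(N))
xml_text = "<facts>" + "".join(
    f"<fact><bank>bank{i}</bank><value>{i * 1.5}</value></fact>" for i in range(N)
) + "</facts>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (bank TEXT, value REAL)")

t0 = time.perf_counter()
conn.executemany("INSERT INTO facts VALUES (?, ?)",
                 ((b, float(v)) for b, v in csv.reader(io.StringIO(csv_text))))
csv_secs = time.perf_counter() - t0

t0 = time.perf_counter()
root = ET.fromstring(xml_text)
conn.executemany("INSERT INTO facts VALUES (?, ?)",
                 ((f.findtext("bank"), float(f.findtext("value"))) for f in root))
xml_secs = time.perf_counter() - t0

print(f"CSV load: {csv_secs:.3f}s  XML load: {xml_secs:.3f}s")
```

The interview's point is subtler than raw speed alone: the XBRL layer has already validated the data upstream, so the downstream consumer can safely take the cheapest transport available.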
The second part of our data collection process involves the importation of the entire body of data from the FDIC in a legacy format which essentially recapitulates and confirms the CDR data we have already collected. We process all of the data, metrics, and ratings for the entire universe of FDIC-insured banks and then finalize the results for that quarter. As I write these comments, it is the first week in February 2010 and we have collected about 7,500 Call Reports from the FDIC CDR. Of interest, the CDR also enables us to calculate and publish a preliminary Bank Stress Index (BSI) rating for all of the banks which have reported so far, providing our clients and readers of our free commentary with access to data about the condition of the US banking industry several weeks before the FDIC press conference. Since timeliness is the key data consumption priority for most investors, the decrease in wait time due to the implementation of XBRL by the FDIC is a huge win for consumers of bank data and ratings.
IRA Preliminary Bank Stress Index (BSI) Grade Distributions – Q4 2009

Quarter Ending         Sample Count   Average BSI
12/09 (PRELIMINARY)    7,278          8.00
09/09 (prior quarter)  8,543          4.44

Source: FDIC/IRA Bank Monitor
3. What is your overall opinion of the SEC's XBRL mandate in terms of the burden it places on business and its usefulness to both agency personnel and end users?
My view of the current tagging regime at the SEC is that it is still a work in progress. As a bank analyst who covers over 20 publicly traded financial institutions, I don’t find nearly the same level of utility in the XBRL-tagged documents on the SEC website as I do using the bank level data from the FDIC, which as I’ve explained we have implemented into a complex model of metrics and ratings. In essence, all of my work in terms of bank unit analysis is “ready to eat” because of the way in which my partner Dennis has leveraged the upstream tagging and characterization power of XBRL to enable our downstream analysis systems. But by the time we reach the consumption phase, we are using tools like SQL to analyze subsets of the XBRL submissions, primarily the numeric values and pre-calculated metrics provided by the FDIC, instead of consuming the entire XBRL document.
When my analysis work turns to looking at the SEC filings for Citigroup or Goldman Sachs, for example, the tagged documents on the SEC website are fun to play with and provide some utility in terms of display. Ultimately, however, the tagging of footnotes and data elements in the XBRL files on the SEC site is not yet saving me much time in terms of overall work process. Compared with the highly automated analytics possible with the FDIC data, the SEC disclosure is what we call “chopped salad” in the financial data world. The key issue here is that XBRL and the related accounting taxonomy have not changed the fact that each bank I cover is allowed under GAAP rules to vary their presentation of results in some very significant ways. Whereas the FDIC bank unit data is tightly defined and reasonably consistent, the presentation of bank financials under GAAP is far more subjective – and deliberately so! Public disclosure is not a photograph or x-ray of a company’s financial performance, but instead is closer to an oil painting, with or without XBRL.
So if I were to line up the most recent 10-Qs from Citi, Goldman, and Morgan Stanley side-by-side, you would find some very significant differences in how key elements are described, defined, and presented, such as off-balance-sheet vehicles and the way in which losses from these activities are disclosed. Some banks charge off losses via the loss reserve account; others bury the losses as non-interest expense. The tagging of footnotes always runs several quarters behind the current state of the art of investor relations at my banks. So since we cannot yet seem to keep the XBRL taxonomy current with the ways in which banks are allowed to disclose (or not disclose) material aspects of their financial performance in the current reporting period, the human brain and the Excel spreadsheet remain the most effective tools for working with SEC filings.
4. In his January 2009 whitepaper Bringing Transparency to the Mortgage-backed Securities Market, Philip Moyer of EDGAR Online describes the benefits he believes XBRL in a centralized MBS reporting system would bring, including better data quality, historical comparability, and reduced reporting costs (see pp. 17-18).
How useful do you think XBRL can be in bringing transparency to the MBS and other securities markets that have been at the heart of the financial crisis?
As I indicated in my earlier comments, the use of XBRL or perhaps another XML dialect is obviously attractive. My partner Dennis, who started his career in finance in the fixed income modeling world by installing an MBS software package in the Pasadena offices of Countrywide in 1991 after a decade at Rockwell International, thinks XBRL might be both overkill and undergunned for this purpose. He thinks a simpler XML structure suitable for easy integration by servicers would suffice for collateral data reporting in MBS. And he thinks deal rule modeling -- which can involve complex mathematics -- might be better served by adapting techniques pioneered in the Department of Defense community. Like most of his peers in the world of data operations, Dennis is pragmatic about using the right hammer for the right nail, because (1) cost and (2) production efficiency are the two key criteria in the world of big-time securities clearing and data processing.
Given the less complex nature of the information to be gathered, you could probably develop an XML variant that was rich in its vocabulary yet relatively easy to transport compared with, say, an XBRL document filed with the SEC by a public company. In fact, there are already a number of well-tested XML dialects in use in the world of securities trading and clearing, so my guess is that the DTCC and other financial institutions would default to the version of XML with the least overhead cost. Remember that the Fedwire and private processing systems are part of a highly developed, highly automated market where XML is very familiar and timeliness and per-unit processing costs are the overwhelming priorities. So I would not suggest that XBRL US spend a lot of time trying to sell itself to the clearing community.
“…Our spendthrift government, the Federal Reserve System and the TBTF [too big to fail] banks together now comprise the paramount political tendency in America today… Until we break the Alliance of Convenience between the Congress, the Fed and the large, TBTF banks and force our public officials to embrace core American values regarding transparency, insolvency and accountability, we will not in my view find a way out of the crisis.
Clearly this muddle requires much more than an XBRL solution. But do you see a role for XBRL in helping to provide the required transparency?
No, the data from the Fed and Treasury is pretty transparent and is actually tagged already with some relatively simple versions of XML. The Fed, for example, makes all of its financial data available in a statistical version of XML. The one area where an XML dialect like XBRL might be useful is the OTC dealer community, but as already noted, the FpML dialect of XML has already been selected by the dealers for tagging OTC derivatives and complex structured securities.
The real trouble here has nothing to do with the data itself or its structure. The problem is that the data is not public, and the dealer banks do not want to standardize these financial instruments or the XML dialect used to report on them, because doing so would impair their monopoly power over the OTC market. As in the case of SEC disclosure, it is always important to remember that the predominant tendency on the part of human beings is to limit disclosure and thereby increase pricing power. Increased disclosure is always bad for monopolies, so they fight it every step of the way. This is why I have been such a consistent critic of OTC derivatives. The lack of transparency and disclosure in OTC derivatives is unfair and violates the most basic American standards of openness in financial markets. The OTC derivatives market of 2010 is like a bucket-shop from the 1920s, but few Americans seem to appreciate such distinctions today.
6. In your speech last year to the American Enterprise Institute (AEI) and Professional Risk Managers International Association, you said:
Believe me when I say that we have seen the wild eyed, "don't you get it" look from…our colleagues in the XBRL community. We love their idealism and their vision, and we share same. But we at IRA also live in the real world of operating and delivering decision support systems for investors and fiduciaries.
Could you expand on these thoughts? In what ways do you think the XBRL community is being “other worldly” in its goals and activities?
New technologies have an allure of transformation and value creation that makes people very excited until you actually try to implement and integrate them into the real world. The creation and implementation of XML, for example, has been relatively successful in that regard. We’ll see how it goes now that the initial invention phase has finished and -- as with all technology -- the proof of the pudding lies in the tradeoffs of efficacy versus the cost of operations and maintenance.
At IRA, we like to always be mindful that there is nothing really new in technology. The basic building blocks of today’s software and hardware tools trace their legacies back to the origins of computing. Each step of the way, in terms of innovation, there are choices and tradeoffs. We like to look at the technology innovation process as a quilt, where data and business case needs interact and the best choice, depending upon those business needs, makes itself apparent as part of the diligence process. Each one of the client silos we serve – consumer, institutional, B2B – has different technical needs, business objectives, and consumption requirements. We see the world of XML as a set of important building blocks that enable data collection and interoperability between systems, but they do not address the entire solution in terms of consumption. We would not be so bold as to pre-judge how our consumers use the data and ratings we supply. We prefer to listen, add our perspective, and then give the user the solution that makes the most sense for their needs.
7. In comments made to CFO.com back in 2006, you agreed that an XBRL mandate for financial reporting would expand coverage of smaller companies. Are you still optimistic that this will happen? Overall, given the difficulties of the equity research industry, how do you see the impact of XBRL on its business? Do you believe that, as advertised, XBRL will help the industry cut costs and expand coverage?
Possibly, but not because of the way the SEC or big accounting has handled the implementation to date. What we’ve heard from the smaller SEC registrants community is that XBRL is an extra step that one’s filing vendor does for you as part of preparing and submitting to the SEC. The tagging has not been internalized by the filer nor made part of their internal data collection and reporting process.
The question about the data vendors that article addressed turns on whether the implementation at SEC results in a useful repository to power an operational downstream analysis solution like the FDIC’s architecture or will just be a demonstration facility. Or to put it in very blunt commercial terms, when I can download the XBRL documents or a subset thereof containing the numerical information in SEC filings, then the existing data vendor monopoly on structured financials will be near an end. The SEC has always taken an evolutionary approach to data collection and dissemination. I suggest that it is time for the SEC to target the distribution of full tagged financials, starting with income statements, balance sheets, and statement of cash flows, as the next practical goal in terms of EDGAR modernization. While the SEC data is still “chopped salad” in terms of accounting presentation, having the data in a consistently structured, “as filed” form would be helpful. Today analysts go to each corporate web site and download disparate Excel and Adobe files to perform analysis.
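As a sketch of what such an "as filed" download would unlock: once the numeric facts in an instance document are available, extracting them takes only a few lines. The namespace, element names, and figures below are simplified placeholders, not a real filing.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an "as filed" XBRL instance; real SEC filings use
# the us-gaap taxonomy with many more contexts, units, and elements.
instance = """<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2009-01-31">
  <us-gaap:Assets contextRef="FY09Q4" unitRef="USD"
      decimals="-6">2000000000000</us-gaap:Assets>
  <us-gaap:Liabilities contextRef="FY09Q4" unitRef="USD"
      decimals="-6">1850000000000</us-gaap:Liabilities>
</xbrl>"""

root = ET.fromstring(instance)

# Harvest just the numeric facts -- the subset a downstream analyst wants --
# instead of consuming the entire document. The tag carries its namespace
# in braces, so we strip everything up to the closing brace.
facts = {child.tag.split("}")[1]: float(child.text)
         for child in root if child.get("unitRef") == "USD"}
print(facts["Assets"] - facts["Liabilities"])
```

Even this toy example shows why a consistently structured "as filed" feed would matter: the same loop works on every filer, with no per-company Excel or PDF scraping.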
…Too many extensions by different companies contribute to noncomparability, because each company is likely to do an extension for basically the same problem in a different way. So the results become difficult to compare. One way to address this is to develop very robust taxonomies, such as those in the United States. However, the problem this creates is that the more robust the taxonomy, the more difficult it is to do the mapping required. So robust taxonomies can be an impediment to adoption.
What are your thoughts on the extension issue? What do you see as the relative positives and negatives of a core US GAAP taxonomy with roughly 17,000 elements?
I agree with Mr. Trites’s statement. The way to make XBRL relevant to investors and analysts would be to start with the basic reporting template available from the commercial data vendors and build a very simple data delivery function that expands the completeness of numeric data from income statements, balance sheets, and cash flows. The current SEC data implementation leaves the data monopoly of the large commercial vendors intact, and thus there has been no progress in terms of innovation by downstream users.
The other point that must be made is that the complexity of the XBRL taxonomy is a function of the business case needs of the audit profession and has very little to do with how institutional investors consume data. Remember that most of the people who work on Wall Street are focused on quantitative analysis and don’t have a clue how to perform the fundamental analysis of a bank or company. When you look at the size of the XBRL taxonomy, the only reasonable conclusion is that the audit firms are trying via stealth to turn the relatively subjective world of public company reporting into a more deterministic exercise requiring the constant services of a FASB rulings subject matter expert. Anyone who has read the Securities Exchange Act of 1934 and is familiar with the legal mandate of the SEC vis-à-vis public company reporting knows this result, apparently sought by the audit firms, is impossible in practical terms, unlikely in political terms, and in conflict with the SEC’s mission of making it possible for ordinary people to actually see and understand what’s happening in the public securities markets.
No, as my previous comments suggest. The audit firms would like to see XBRL extended to internal reporting, but for many practical reasons companies are going to fight creating an explicit link between internal management systems and external reporting. In every bank that I cover, there is a set of GAAP books and a set of “managed” books. The two worlds only meet in the CFO’s office when it is time to make public disclosure. The same goes for commercial manufacturing, retail, and service businesses. The reality is that management systems technologies such as ERP are light years ahead of accounting innovations like XBRL. There’s a strong case for CFOs to tack reporting modules onto their existing ERP systems rather than replace these mission-critical solutions with an auditor-recommended alternative that changes internal forms of corporate governance and control.
10. XBRL initiatives are now being pursued in a wide range of areas, including corporate actions, sustainability reporting, and microfinance. What uses of XBRL that go beyond the usual sphere of financial reporting and company accounting systems seem most promising to you?
All of these are promising areas, but the two key questions to ask are: (1) is the data collected going to be made public, and (2) is there any standardization of the data to make it useful in terms of downstream analysis? In the US, for example, there is a push among many states and municipalities to make property records electronic. But there is no state-by-state or national template for this effort. This is a huge potential opportunity, but getting the states, the realtors and the courts to agree on a standard is another matter.
11. XBRL has now been implemented for financial reporting in most or all of the major economies, including Japan, France, and Italy, as well as smaller nations like Chile and Denmark. What can the US learn from other countries in implementing XBRL? What can other countries learn from the US adoption?
There are big lessons to learn from the two “wins” in terms of US adoption, FDIC and SEC. The FDIC effort is a success because it represents a standardized, tightly defined implementation that meets specific, legally mandated public reporting requirements. In the case of the SEC, the benefits are less clear because the task is so much more diffuse and the ability of the SEC to impose standardization is non-existent under GAAP. That is the key difference between the FDIC and SEC business case paths for XBRL adoption.
Making SEC reporting as deterministic and standardized as FDIC bank reporting is neither possible nor desirable, but that does not lessen the utility of XBRL at the SEC. Once we accept that reality, then the utility and development path for the SEC adoption of XBRL will become easier and the necessary choices will become much clearer. The world of public company reporting is always changing as the economy and markets change.
Staying entirely current with the state of the art in terms of GAAP reporting and investor relations spin is a daunting, probably impossible task given current law and resource constraints. The effort to describe and add to the XBRL US GAAP taxonomy reminds me of Franz Kafka’s 1917 short story, “The Great Wall of China,” which describes how generations of people over hundreds of years worked on sections of the Great Wall, but most never saw it completed nor knew the enormous extent of the entire task. Just as public company reporting evolves continuously and in response to many different internal and external forces, so too the adoption of XBRL will need to remain flexible and aware that the target is constantly moving.
For many businesses filing their financial statements using XBRL to comply with the SEC mandate, the phrase “extension taxonomy” is a largely misunderstood term. There is a narrow view that it is only about adding brand-new company-specific elements which do not exist in the base taxonomy (e.g., US GAAP 2009). While adding new elements is definitely one of the purposes for creating extensions, there are many other drivers for creating an extension taxonomy. Even though some of the principles apply to other scenarios, this blog post specifically refers to extension taxonomies as they are used to meet the SEC XBRL mandate for operating companies.
The reality is that almost all of the XBRL filers to date have created a barebones extension taxonomy. Because you’ll be delivering your own unique extension taxonomy to the SEC – and a slightly modified version of the taxonomy with every quarterly and annual financial statement you certify – it’s important to understand the component parts of the document, and some basic rules to help guide you in developing it.
An extension taxonomy is not just about a new set of tags (or elements). Elements are actually very specific pieces of information, and the XBRL US GAAP Taxonomy Preparer’s Guide advises preparers to first determine if it is possible to use more general – and less specific – extension tactics. These higher-level tactics can be re-used in future periods, resulting in much less effort over time.
You can tackle creating an extension taxonomy by thinking about it as an upside-down pyramid: begin with the most general set of extension tactics and see if that suits your purposes. If not, move to a lower level, where more specific information is required. For the purposes of this blog post, we’ve grouped the twelve methods of extending the taxonomy under four easily remembered headings, listed roughly in order of the increasing impact each tactic will have on future re-usability.
1. Relationships Help Present Your View
It is recommended that companies perform the following at a minimum: (1) create new Relationship Groups, and (2) change the Ordering of existing relationships defined in one of the Industry Entry Points. These methods of extending the XBRL US GAAP Taxonomy have the most capacity for future re-use.
A new Relationship Group allows preparers to assemble custom presentation relationships between already existing elements, tables or existing groups.
Changing the Order is as simple as reordering the children in an existing list.
In some cases, simply modifying relationships within one of the existing Industry Entry Point taxonomies can be sufficient for a business to complete their own extension taxonomy. For example, you might need to modify the order of a list as follows:
Industry Entry Point Cash Flow Statement – US GAAP Taxonomy Order:
1. Net Cash Provided by (Used in) Investing Activities, Continuing Operations
2. Payments for (Proceeds from) Mortgage Servicing Rights
3. Payments for (Proceeds from) Investments

Your Company’s Cash Flow Statement – New Extension Taxonomy Order:
1. Net Cash Provided by (Used in) Investing Activities, Continuing Operations
2. Payments for (Proceeds from) Investments
3. Payments for (Proceeds from) Mortgage Servicing Rights
It is also possible to (3) add New Relationships between existing elements. An example would be creating a new calculation for multiple elements.
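A new calculation relationship of that kind can be checked mechanically: the parent element should equal the weighted sum of its children. The sketch below uses invented element names and values purely to show the idea.

```python
# Each calculation relationship: parent -> list of (child, weight) pairs.
# Names and weights here are illustrative, not from the actual taxonomy.
calc = {
    "NetCashProvidedByUsedInInvestingActivities": [
        ("PaymentsForProceedsFromInvestments", -1.0),
        ("PaymentsForProceedsFromMortgageServicingRights", -1.0),
    ],
}
# Reported facts (made-up figures).
facts = {
    "NetCashProvidedByUsedInInvestingActivities": -350.0,
    "PaymentsForProceedsFromInvestments": 300.0,
    "PaymentsForProceedsFromMortgageServicingRights": 50.0,
}

def consistent(parent, facts, calc, tol=0.5):
    """Check one calculation relationship against the reported facts."""
    total = sum(facts[child] * weight for child, weight in calc[parent])
    return abs(total - facts[parent]) <= tol

print(consistent("NetCashProvidedByUsedInInvestingActivities", facts, calc))
```

This is also why adding calculation relationships for new numeric elements (discussed later in this post) pays off: they make this kind of automated consistency check possible.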
Preparers can also (4) suppress or change a Parent-Child Relationship; however, this is typically done only to resolve a validation error or calculation inconsistency.
Adding New Relationships or suppressing Parent-Child Relationships both have slightly more impact on future re-use of the extension taxonomy, and should be investigated only after determining if the labeling methodologies below can first do the job.
2. The Role of Labels
The SEC recently presented its findings from a review of the Year 1 XBRL filings, recommending that an “element label should match its line item caption”. It is therefore recommended, especially for any kind of tabular data, that you modify the labels on elements to match your financials.
There are three ways labels could play a part in your extension taxonomy: (5) change the Preferred Label on a Presentation Relationship, (6) add a new Abstract Heading Element, or (7) simply add or change Element Labels.
Changing a Presentation Relationship’s Preferred Label helps dictate how data is displayed. An example would be to change the preferred label on a line item’s presentation relationship from “Terse” to “Negating”, ensuring that the numerical data displayed for that line item has its sign flipped.
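A rough illustration of what a rendering tool does with a "Negating" preferred label is sketched below. The function and role names are illustrative, not part of any XBRL API; the point is that the stored fact never changes, only its rendering.

```python
# Sketch: the tagged fact is unchanged; only the displayed figure flips sign.
def render_value(value, preferred_label_role):
    """Render a numeric fact according to an (illustrative) label role."""
    if preferred_label_role == "negated":  # stands in for the Negating role
        value = -value
    # Conventional financial-statement formatting: parentheses for negatives.
    return f"({abs(value):,})" if value < 0 else f"{value:,}"

stored_fact = 1500  # as tagged in the instance document
print(render_value(stored_fact, "terse"))
print(render_value(stored_fact, "negated"))
```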
Adding a new Abstract Heading Element simply provides a new heading, under which child items can be grouped. Preparers should first determine if they can modify the label of an existing abstract element prior to creating a new one.
Adding or changing Element Labels does not modify the element’s definition or references, both of which are crucial pieces of data used to define that element. This is always preferable to creating a new element (described below under the section on elements), which has less potential for future reusability.
Of all labeling methodologies, the Preferred Label on a Presentation Relationship concept is the most difficult for people to understand, and warrants further explanation. Every tag or element has several different types of display labels, each of which can be used in different places throughout your financial statement without modifying the underlying data.
Element or “Tag”: Goodwill
Possible Labels for the Element or “Tag”: “Goodwill, Beginning Balance” and “Goodwill, Ending Balance”
Preferred Label (as defined in the Presentation Relationship): selects which of those labels is displayed
Resulting Label Displayed on Your Financial Statement: “Goodwill, Beginning Balance”
Different parts of your financial statement can reference different label types using Preferred Labels, changing the information displayed on your financial statement without actually changing the tag itself, or the underlying data. In one section of your financials, you might want to change the preferred label for [Goodwill] to “Period End” in order to display “Goodwill, Ending Balance” on your financial statement.
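The mechanism can be sketched as a simple lookup: the element and its data stay fixed, and each presentation relationship merely selects one of the element's labels. The label role names below are abbreviated stand-ins for the actual XBRL label role URIs.

```python
# Each element carries several labels; a presentation relationship picks one
# via its preferred label. Role names are abbreviated for readability.
labels = {
    "Goodwill": {
        "standard":    "Goodwill",
        "periodStart": "Goodwill, Beginning Balance",
        "periodEnd":   "Goodwill, Ending Balance",
    },
}

def displayed_caption(element, preferred_label="standard"):
    """Resolve the caption a given presentation relationship will show."""
    return labels[element][preferred_label]

# A rollforward can open and close with the same underlying element:
print(displayed_caption("Goodwill", "periodStart"))
print(displayed_caption("Goodwill", "periodEnd"))
```

Both captions come from the same tag and the same tagged value; nothing in the instance document changes.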
3. Extensions and XBRL Tables
XBRL Tables are powerful constructs, allowing you to group, display and reformat data – all without changing the underlying relationships between the data included in the table itself. For the purposes of this blog post, we will assume that readers have a basic understanding of XBRL tables and terminology, which can be found in Chapter 5 of the XBRL US GAAP Taxonomy Preparer’s Guide.
There are three ways to extend using tables: adding a new (8) Domain Member to an Existing Table Domain, adding a new (9) Axis to an Existing Table, and adding a (10) New Table.
Adding Domain Members is one of the most common ways of extending taxonomies, especially as you get into detailed tagging. As an example, the Domain “Major Types of Debt and Equity Securities” might be composed of the following Domain Member Elements: “U.S. Treasury Notes”, “Corporate Debt Securities” and “Equity Securities”. Adding a new Domain Member Element to an existing Domain essentially adds a new table column, into which you can add financial data.
A table Axis contains one or several Domains. Adding an Axis to a table allows you to group Domains together for more complicated reporting requirements. Imagine that you are reporting “Assets by Type” (your Axis) and need to also break them out by “Assets by Location” (your new Axis). Adding the new location Axis would allow you to create a master column by asset type, under which assets by location could be individually reported.
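A toy model of the effect is sketched below, with invented member names: each fact in an XBRL table is addressed by its coordinates on every axis, so adding an axis simply adds a coordinate to each fact.

```python
# Sketch: facts keyed by their position on each table axis. Axis and member
# names are illustrative, not taken from the US GAAP taxonomy.
facts = {}

def report(value, **coords):
    """Store a fact keyed by its coordinates on every axis."""
    facts[frozenset(coords.items())] = value

# With the new "location" axis, each asset-type fact gains a coordinate:
report(120.0, asset_type="Loans",      location="Domestic")
report(45.0,  asset_type="Loans",      location="Foreign")
report(80.0,  asset_type="Securities", location="Domestic")

# Slicing on one axis aggregates across the other:
domestic = sum(v for k, v in facts.items() if ("location", "Domestic") in k)
print(domestic)
```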
Before adding a New Table, first determine if you can modify one of the tables available in the existing XBRL US GAAP Taxonomy Industry Entry Points, either by adding Domain Members or Axes as described above. Adding a new table is more difficult than modifying an existing one – requiring that you modify, create or define additional elements, attributes, and relationships. But in some situations it will be necessary.
The example below illustrates how adding a new Axis can help you group data for more detailed reporting situations than the standard taxonomy might allow. In this scenario, the Axis “Reporting Segment” was added to the original table, facilitating this grouping:
[Table: the line item “Entity-Wide Revenue, Major Customer, Amount” is reported across two axes; the customer columns “US Federal Government” and “State of Maryland” (Axis 2) repeat under each member of the added “Reporting Segment” axis (Axis 1).]
4. Add New Elements Only When Required
So far, so good. We’ve made it through nine of twelve methods to extend the existing taxonomy to suit your unique business situation – all without creating new elements or tags. But in some cases, you’ll need to do just that. As we’ve already said, creating new elements or tags has the most impact on future re-use, so do your best to repurpose what already exists prior to extending using the methods below. This is one of the areas which has the highest impact on comparability of data across companies – and will be one which regulators will watch very closely.
Adding new elements should only be done once the existing taxonomy has been completely reviewed, including the documentation on existing elements, to determine whether one of the methods above can suffice. If not, there are two ways to add a new element: adding a new (11) Numeric Element, or adding a new (12) String, Text Block, or Other Non-Numeric Element. The scope of this post is not sufficient to cover the ins and outs of creating new elements, the details of which can be found in Chapter 6 of the XBRL US GAAP Taxonomy Preparer’s Guide.
There are some good rules of thumb to follow:
You will usually need to add a new Numeric Element in two situations: when you need to combine two or more line items into a single element, or when you need to introduce entirely new financial reporting concepts not covered in the existing taxonomy. Every new Numeric Element requires both a definition and a presentation relationship to at least one other element. When you add new elements, it is very important to take the time to document the purpose and reasoning behind them. It is also usually a good idea to develop at least one calculation relationship to one or more other elements for every Numeric Element you create.
Adding a Non-Numeric Element (such as a text block or string) is appropriate when the existing tags don’t meet your needs, especially for block tagging of complete notes, individual accounting policies, or tables/schedules. Non-Numeric Elements do not participate in calculation relationships, so their content cannot be validated that way.
Extension taxonomy documents are as important as the Instance documents you submit to the SEC, and special attention must be paid to their contents. Extension taxonomies involve much more than adding new elements; they include changes to Relationships, Labels, XBRL Tables, and more. Even if you have completely outsourced your XBRL preparation, it’s important to work with your provider to understand what goes into your extension taxonomy, since it is one of the documents you will be filing with the SEC.
Understanding Automated Data Management

It’s important to have a sense of perspective about XBRL: it is a means, not the end, of data reporting. An XBRL system is neither the beginning nor the end of a data point; instead, think of XBRL as a postal system. The starting point is an accounting/finance system or other data source, and the endpoint is a data repository.
For regulatory filings in America, the starting point is the filer’s accounting/finance system and the endpoint is EDGAR.
EDGAR is a data warehouse that stores pertinent information for internal use and reporting purposes. It is fed from an XBRL-based receipt and validation system and is designed around XBRL, with the data it stores architected around US GAAP. EDGAR’s purpose is processing and reporting filing data.
What lies between the starting point and EDGAR are the filer’s XBRL system and the XBRL receiving processes running at the SEC. Thus, the first task for each regulatory filer is to get data into an XBRL filing. Automated data management can play an instrumental role in this task.
Getting data into an XBRL filing could be a manual drag-and-drop operation using an add-on to Microsoft Excel. This method does not save mappings, though. For your next filing, you would need to start all over again.
A better approach, therefore, is bringing data in from an accounting/finance system, using an automated process. This process is designed to be seamless. Mappings are stored permanently and reused. While setting up such a system requires up-front work for the first filing, the benefits well outweigh the costs.
Event Triggers

The process of getting applicable data from an accounting/finance system or other source into the XBRL filing is triggered by an event, generally the closing of the books. The event triggers a data transfer to the XBRL system, which may take the form of a direct transfer; an XML, character-separated, or fixed-format data file; or an Excel spreadsheet.
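One concrete shape the trigger and transfer might take is sketched below, using a character-separated extract. The event handler, field names, and account codes are all assumptions for illustration, not a prescribed interface.

```python
import csv
import io

# Sketch of the trigger: when the books close, the accounting system pushes
# a character-separated extract toward the XBRL system.
def close_books(trial_balance):
    """Event handler: serialize the trial balance for the XBRL system."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["account", "balance"])
    for account, balance in sorted(trial_balance.items()):
        writer.writerow([account, f"{balance:.2f}"])
    return buf.getvalue()  # in practice, transmitted rather than returned

extract = close_books({"1000-Cash": 250000.0, "2000-AP": -80000.0})
print(extract)
```

As the next paragraph notes, the transport itself is not material; any of the listed formats would serve equally well.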
How the transfer is achieved is not material — all of the above can work equally well — but the data to be transmitted must be planned in advance.
First, we must determine what data is to be acquired and transferred. Second, we need to define how to map that data to the XBRL taxonomy. Once these determinations are made, we should be able to make the data transfer, then run reports through the XBRL system in order to verify that everything looks good and is close to filing.
After the first filing, this should be a fairly quick and painless process. The first filing requires setup, mapping, testing, and quality control, but once these processes are set in place, life gets easier.
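The two determinations described above (what data to acquire, and how to map it to the taxonomy) can be sketched as follows. Every account code and element name here is illustrative.

```python
# Chart-of-accounts code -> taxonomy element. The mapping is stored and
# reused each period; building it is the one-time setup cost.
mapping = {
    "1000-Cash": "CashAndCashEquivalentsAtCarryingValue",
    "1200-AR":   "AccountsReceivableNetCurrent",
}
# Data arriving from the accounting system (made-up figures).
source = {"1000-Cash": 250000.0, "1200-AR": 90000.0, "9999-Memo": 1.0}

# Transfer only mapped accounts; flag anything unmapped for human review
# rather than silently dropping it.
tagged = {mapping[k]: v for k, v in source.items() if k in mapping}
unmapped = [k for k in source if k not in mapping]
print(tagged)
print(unmapped)
```

The review of `unmapped` accounts is the quality-control step: after the first filing, it is usually empty, which is why subsequent filings are quick.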
Security

Generally, an issue in data transfer is whether the origin system pushes data or the receiving system pulls it. In the case of XBRL, however, the only practical option is to push. The XBRL system has no way of knowing when the books have closed, so it cannot initiate a transfer, and it has no knowledge of how to reach into an accounting system to get the data. There is also a security issue in play: few businesses will allow a third-party vendor to reach into the guts of their accounting system to obtain information. It remains more secure, and more desirable, to prepare a push on one’s own terms.
Consistency

Here is a caveat with spreadsheets: typically, spreadsheets are embellished with reporting logic that makes them more readable to humans, but these embellishments cause a mapping problem. What is easier for a human to read is not necessarily machine-readable.
Consider the following example. The element “net property and equipment” may be embellished in a spreadsheet as “net property and equipment, net of accumulated depreciation of $x.” That may be easier for a human to read, but, upon export, the suffix “net of accumulated depreciation of $x” changes with each reporting period. Thus, we would no longer have a constant — “net property and equipment” — to map to.
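One pragmatic workaround is to normalize captions before mapping, so the period-varying suffix is stripped away. The sketch below handles only this one embellishment pattern; a real filing pipeline would need a broader set of normalization rules.

```python
import re

# The embellished caption varies each period, so an exact-match mapping
# breaks. Normalizing recovers the constant element name. The regex is
# illustrative and covers only this specific embellishment.
def normalize(caption):
    return re.sub(r",?\s*net of accumulated depreciation of \$[\d,.]+",
                  "", caption, flags=re.IGNORECASE).strip()

q3 = "Net property and equipment, net of accumulated depreciation of $1,200"
q4 = "Net property and equipment, net of accumulated depreciation of $1,450"
print(normalize(q3) == normalize(q4))
print(normalize(q4))
```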
Some reporting logic embellishments in spreadsheets can actually impede machine readability. Consider the following fragment of a spreadsheet that tracks changes in operating assets and liabilities:
Changes in operating assets and liabilities (net of dispositions and acquisitions):
    Accounts Receivable
    Inventories
    Deferred Costs
A human can easily understand this: the entries for “Accounts Receivable”, “Inventories”, and “Deferred Costs” refer to changes to each of these values.
A computer cannot take this context into consideration, however. The line for changes in accounts receivable has the title “Accounts Receivable”, so a computer program could merge this with the value for Accounts Receivable itself. In other words, the machine would conflate the change in a value with the value itself, which must not be allowed to happen.
A better approach is directly correlating to the chart of accounts. This means that character-separated, fixed-format, or XML files are better suited as source files. These provide a consistent and direct correlation to the financial/accounting system, which makes the mapping process easier and more manageable.
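A sketch of why account-keyed source files map so cleanly: the stable account codes, not the human-readable captions, drive the mapping. The file layout and account codes below are invented for illustration.

```python
import csv
import io

# A character-separated extract keyed by account number correlates directly
# to the chart of accounts; there are no display embellishments to strip.
source_file = io.StringIO(
    "account,description,balance\n"
    "1500,Property and equipment gross,5200\n"
    "1510,Accumulated depreciation,-1450\n"
)
balances = {row["account"]: float(row["balance"])
            for row in csv.DictReader(source_file)}

# Captions may change each period; the account codes do not.
net_ppe = balances["1500"] + balances["1510"]
print(net_ppe)
```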
Text-Based Disclosures

Automated processes work well for financial reports like the balance sheet and income statement, but disclosures present other issues. These generally come from text or “EDGARized” files.
For an automated process to work, a disclosure source file must have a consistent format. In many cases, reporting numeric facts can be automated, but text-based disclosures must be manually dragged and dropped to complete the filing.
Optimally, disclosures go into a spreadsheet or processable data file with a heading and a disclosure body, which would be machine-readable. An EDGARized file may be machine-readable if it contains consistent heading tags. Otherwise, disclosures may be processed from an XML or character-separated file.
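A minimal sketch of parsing such a heading-plus-body layout is shown below. The blank-line record convention is an assumption for illustration, not a prescribed format; the point is only that a consistent structure makes disclosures machine-readable.

```python
# Each record: a heading line followed by its body, separated by blank
# lines. Disclosure text is invented for illustration.
raw = """Significant Accounting Policies
Revenue is recognized when earned and realizable.

Commitments and Contingencies
The Company is party to various legal proceedings."""

disclosures = {}
for block in raw.split("\n\n"):
    heading, _, body = block.partition("\n")
    disclosures[heading] = body.strip()

print(sorted(disclosures))
```

With an inconsistent layout, this parse fails, and the disclosures fall back to the manual drag-and-drop step described above.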
Conclusion

Even considering the challenges listed here, an integrated and automated approach is the best solution for serious filers. Excel spreadsheet add-on programs simply cannot provide automated integration with financial/accounting systems. The small investment of up-front work required to properly define the integration will quickly pay off in improved functionality and reliability.