Composite Index of National Capability


Introduction

“Power” – here defined as the ability of a nation to exercise and resist influence – is a function of many factors, among them the nation’s material capabilities. Power and material capabilities are not identical; but given their association, it is essential that we try to define the latter in operational terms so as to understand the former. This manual examines some of the more crucial issues in the selection and construction of indicators of these capabilities, discusses the implications of the various options, and indicates the decisions made by the Correlates of War project for the Composite Index of National Capability. It presents in detail the terminology and definitions of each indicator, data collection techniques, problems and irregularities, and data manipulation procedures. Additionally, it functions as a guideline for reading the data set and provides a bibliography.

Not all of the decisions were optimal, and the trade-offs were often difficult. Nor did the enterprise start from scratch. Historians, social and physical scientists, military analysts, and operations researchers have examined the ideas of power base, national strength, and material capabilities. As the bibliography makes clear, about two dozen authors have tried to develop – and generate data for – indicators of national attributes. We profited greatly from these prior efforts, be they speculative, empirical, or both. This literature has been of great assistance, especially in illuminating the difficulties and highlighting the myriad pitfalls we have, we hope, avoided.

General Considerations

There are certain general considerations we must note before turning to the specific dimensions in any detail. First and foremost is that of comparability across a long time period (1816 to the present) of a staggering variety of territorial states, peoples, cultures, and institutions at radically different stages of their economic, social, and political development at any given moment. An indicator that might validly compare a group of European states in 1960 may very well be useless in comparing one of them to a North African state in the same year, let alone 1930 or 1870. We selected our indicators from among those that were both meaningful at any given time and that had roughly the same meaning across this broad time range. This requirement limited our choices, even in the statistically better endowed post-World War I years.

Various caveats must be made concerning the validity of the indicators the project selected. The first of these concerns comparison, which relies on the sometimes questionable assumption that equal values of the same indicator make equal contributions to capability. Differentially weighting the contributions of individual nations entails questions that the project was not ready to address. Certain indices where this caution especially applies are noted later.

A second caveat concerns the choice of coding rules given several equally plausible alternatives. Here, the purpose is that the value assigned to the underlying concept not be highly sensitive to this choice. In some cases, we estimated this sensitivity by recollecting data for a sample subset, applying the alternative rules, and examining the distribution of the resulting values around those previously gathered.

A third caveat concerns information sources. We consulted several sources, and we were particularly interested in series having long runs of data from multiple sources overlapping the same time period, because this allowed better discrimination of reliable figures. Given different volumes of the same series, we used the most recent data reported, while remaining alert to the possibility that revisions reflected manipulation by the reporting nation or changes in the methods of reporting, rather than improvements in accuracy.

A fourth caveat concerns the role of estimation. It is not surprising that we could not find all the requisite information, and we did not expend inordinate time and effort chasing down every last missing figure. Rather, we filled in the gaps through interpolation, where it was reasonable to assume that the values at the endpoints of the gap were accurate and that the rate of change between them was uniform. We discuss this further under the particular sections. In cases of missing data or lack of comparability among sources, we often resorted to bivariate regression of the known values on time, using the fitted equation to estimate the data in the series. The two methods contrast as follows: estimates obtained by interpolation are assumed correct even if they depart from the long-run trend, while estimates obtained by regression assume that the true rate of change is constant over a longer sequence of several known data points, of which the endpoints and all other reported values may be in error. The approach we used depended on the context of all that was known about each individual case.
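
To make the contrast concrete, the sketch below applies both strategies to a toy series; the data and function names are illustrative only, not part of the project's tooling.

```python
# Illustrative sketch of the two gap-filling strategies described above.
# The toy data and function names are assumptions, not COW project code.

def interpolate_gap(series):
    """Fill internal gaps (None) by assuming the endpoint values are
    accurate and the rate of change between them is uniform (linear)."""
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for left, right in zip(known, known[1:]):
        step = (filled[right] - filled[left]) / (right - left)
        for i in range(left + 1, right):
            filled[i] = filled[left] + step * (i - left)
    return filled

def regress_on_time(series):
    """Re-estimate EVERY point from an ordinary least-squares fit of the
    known values on time, assuming a constant long-run rate of change;
    individual reported values (endpoints included) may be in error."""
    pts = [(i, v) for i, v in enumerate(series) if v is not None]
    n = len(pts)
    mean_t = sum(t for t, _ in pts) / n
    mean_v = sum(v for _, v in pts) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in pts)
             / sum((t - mean_t) ** 2 for t, _ in pts))
    intercept = mean_v - slope * mean_t
    return [intercept + slope * t for t in range(len(series))]

# A toy population series (in thousands) with missing years:
series = [1000, None, 1080, None, None, 1210]
print(interpolate_gap(series))   # keeps reported values, fills gaps linearly
print(regress_on_time(series))   # smooths all values toward the trend line
```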

A fifth caveat concerns data availability and the inevitability of error. Most of the indicators used in the Correlates of War project are generated by the application of operational criteria and coding rules to the often ambiguous “traces” of history. In some cases we can be quite confident about the reliability of this approach because we ourselves developed the data. In other cases, we rely on apparently precise numerical traces recorded by others at earlier times, with coding and scaling criteria ranging from unknown to inconsistent. For instance, given that our definitions of national territories sometimes differ from source definitions, and given the imprecision of the latter, the figures we obtained may reflect incorrect boundaries. Likewise, errors could have been introduced through efforts to correct for boundary changes.

Error could also arise from inappropriate uses of estimation. The assumption that endpoints are accurate (in the case of interpolation), or that transient residuals in documented values do not represent historically real fluctuations (in the case of regression), may be wrong. In either case, the assumption of constant rates of change may have been mistaken. While we sought to leave no stone unturned, the reporting of national statistics is a recent practice, and as one moves back toward 1816, statistical availability and quality deteriorate. Given the paucity of documentation, figures and estimates of inferior reliability were often the only kind available. In those cases, and despite the possibility of error, we had no choice but to identify, select, and combine numerical estimates from the available evidence, hoping that we have recognized and taken account of differing criteria.

Given the multiplicity of interpretations as well as the difficulty of validation, we expect alternative national capability indicators to be put forth with some regularity well into the future. This leads us to a brief consideration of the dimensions and indicators of capability we adopted and why. We intended to tap the scholarly consensus on the major components of general capabilities, not to develop the most comprehensive predictor of success in diplomacy, crisis, or war. The extent to which these capabilities account for such success is an empirical question, and there is mounting evidence that the two differ in important ways.

Basic Dimensions

The project selected demographic, industrial, and military indicators as the most effective measures of a nation’s material capabilities. These three dimensions reflect the breadth and depth of the resources that a nation could bring to bear in militarized disputes. Why have we treated only demographic, industrial, and military indicators of national capabilities? Why not geographic location, terrain, or natural resources, all of which clearly affect material capabilities? Location, for example, could be important in several senses: island and peninsular states are often more able to trade with a larger number of others, are somewhat more defensible against invasion, emphasize sea power over land power (thus appearing less able to threaten another with invasion), and have fewer close neighbors with whom to quarrel. Landlocked states are typically more restricted in their choice of trading partners, are more vulnerable to invasion, occupation, or annexation, have more immediate neighbors, and “require” greater land forces that often appear threatening. All these facets could detract from or enhance a state’s capabilities. However, they are too dyad-specific to permit valid cross-national comparison, because they pertain to the relationship between nations rather than to the characteristics of a given nation. As for natural resources such as arable land, climate, and resource availability, these factors are already reflected to a considerable extent in the indicators we employed.

There is, of course, the question of effective political institutions, citizen competence, regime legitimacy, and the professional competence of the national security elites. While these are far from negligible, they contribute to national power through the efficiency with which the basic material capabilities are utilized; they are not themselves a component of such capabilities.

A final and major point is that while most researchers grant that the demographic, industrial, and military dimensions are three of the most central components of material strength, they may nevertheless quarrel either with (1) the specific subcomponents or (2) the decision to stay with them over nearly two centuries. These issues are dealt with later in their specific contexts. The value of uniform series throughout the period is a question that must be subjected to further inquiry, by empirical means and on the basis of data sets such as this one.

Next we address the procedures and problems of the individual indicators. Where there are important departures from core procedures, we note them in this document and in the data set itself. For each of the three dimensions, we begin with an introductory section and follow it, for each of the two sub-components on which the dimension rests, with discussions of data acquisition and generation, and of data problems and potential errors.

Overview of Version 3.0

Version 3.0 of the National Material Capabilities data set is the result of several years of effort undertaken at the Pennsylvania State University by the COW2 Project. Two major updates have taken place. First, additional detail about the source for and quality of data points was added to some component sets; we hope to continue this practice in the future. Second, each component series was extended, and each series was examined and in some cases revised. A brief overview of these changes is outlined below, starting with the universal updates and then moving to individual component updates. Once those two discussions are complete, this manual goes into greater detail about each of the six indicators of national capabilities.

Sub-Component Data

Along with the overall data, the COW2 project is releasing additional information about each separate sub-component. Each sub-component has its own separate data set (saved in Microsoft Access format) which contains new detail about the particular variable. The information in these sub-data sets includes, in particular, source identification, quality codes, and anomaly codes, along with the values for the variables in each state-year. The final values for each state-year are then placed in the final overall six-component data set typically used by analysts.
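
As a rough illustration of what one row of such a sub-component set might carry, consider the sketch below; the field names and values are assumptions for exposition, not the actual Access column names.

```python
# Hypothetical shape of one sub-component record; the field names are
# illustrative assumptions, not the actual columns in the data set.
from dataclasses import dataclass

@dataclass
class SubComponentRecord:
    ccode: int          # COW country code of the state
    year: int           # the state-year being measured
    value: float        # the sub-component value for that state-year
    source: str         # identification of the source the value came from
    quality_code: str   # e.g., "A" for a well-documented data point
    anomaly_code: str   # flags a potential discontinuity in the series

# A toy record (the numbers are invented; ccode 2 is the COW code for the US):
rec = SubComponentRecord(ccode=2, year=1950, value=1234.0,
                         source="toy source id",
                         quality_code="A", anomaly_code="")
```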

Discontinuities and Source/Quality Codes

It is important to document the source of, and the confidence we have in, our data points. Therefore, coding schemes for source and quality codes were developed during the collection of v3.0 and included where possible and practical during the update. For instance, the sub-component data sets include the source of each value in the data series. In many cases, we were unable to trace a data value to a particular source. In such cases, we retained the original values, which did come from specific sources, but sources that we simply can no longer identify.

In any data set, there are data points that must be interpolated, extrapolated, or estimated. Previously, COW data sets have not listed which data points are interpolated and which come from solid data sources.1 In this version of the national capabilities data set, we made these estimations transparent to users where possible by creating a quality code variable as a separate column in four of the national capability indicators: iron and steel production, primary energy consumption, total population, and urban population. It is important to note that each component has its own quality coding scheme. Because of very different coding rules and potential fluctuations, each component needed its own coding approach. For instance, total population changes very slowly, and a census every ten years is the norm; basic growth can easily be calculated for each country, and anything that can radically alter a state’s population will most often be well documented. For a concept like primary energy consumption, however, quite rapid fluctuations in usage are possible: oil embargoes, new technologies, and wars can make energy consumption values swing greatly. Therefore, this commodity has a higher standard for its data point quality, and that higher standard is reflected in its quality codes.
Ideally, these data quality codes would be a temporary element of this data set. The long-term goal of this project should be to eventually find specific data for each data point that falls short of the standard for receiving an “A” (the universal designation for a well-documented data point). As this research advances, once all data points in a series receive an “A”, the quality codes for that series would then be irrelevant and could be dropped from the data set.
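
For users who want to gauge how far a series currently falls short of that standard, a simple pass over the quality-code column suffices; the sketch below assumes records shaped like the hypothetical SubComponentRecord above.

```python
# Minimal sketch: summarize what fraction of a series carries each quality
# code, assuming records shaped like the hypothetical SubComponentRecord.
from collections import Counter

def quality_summary(records):
    counts = Counter(r.quality_code for r in records)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

# e.g., {"A": 0.9, "C": 0.1} -> 10% of the points still rest on estimation
```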

A second new element added to these data sets is the identification of anomalies. One of the most routine questions raised about any data set concerns major fluctuations in data values. Oftentimes, these fluctuations reflect true changes in the data. In other cases, however, fluctuations can be introduced by the coding process itself: changing data sources, differing conversion factors, or introducing new components can create an apparent disconnect in a data series.

As a proactive approach to the discontinuities that appear in many data sets, each component now has an anomaly code column included in the data set. When a potential discontinuity was found in a data series, it was noted, and supplemental research was done to identify the cause of the anomaly. In some cases, a specific cause was easy to identify and document, such as changes in population after wars or losses of territory; in such cases, the fluctuation is real and understandable. In other cases, anomalies were created by changes in the data structure itself, such as the switch from iron to steel production as the indicator, or by a new source introducing a jump in a series. In these cases, the apparent increase or decrease in an indicator is artificial, and the jump must be accounted for in time-series analysis of the component series. Unfortunately, there were cases where no discernible reason could be found for the anomaly between previous and subsequent data points. These points were documented, and fully documenting the reasons for all anomalies in these data sets remains a goal for future versions of the project.
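
A screen of this kind can be approximated with a simple year-over-year check. The sketch below is illustrative only; the 50% threshold is an arbitrary assumption, not the project's actual rule.

```python
# Illustrative screen for potential discontinuities: flag year-over-year
# changes beyond a chosen threshold for supplemental research. The 50%
# threshold and the toy series are assumptions for exposition.

def flag_jumps(years, values, threshold=0.5):
    flagged = []
    pairs = zip(zip(years, values), zip(years[1:], values[1:]))
    for (y0, v0), (y1, v1) in pairs:
        if v0 and abs(v1 - v0) / abs(v0) > threshold:
            flagged.append((y0, y1))
    return flagged

years = [1899, 1900, 1901, 1902]
pec = [120.0, 118.0, 310.0, 305.0]     # toy primary energy consumption series
print(flag_jumps(years, pec))          # [(1900, 1901)] -> investigate the jump
```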

Individual Data Set Updates

Each of the six indicators of national capabilities underwent revisions and updates over the course of this project. While there is more detail in the sections that follow, it is important to note at least briefly what some of the major modifications and improvements are.
o The Military Personnel Data Set was both updated and modified. It was modified from previous versions by replacing previous data with data from the U.S. Arms Control and Disarmament Agency (ACDA) for all data points from 1961 until 1993. The data were also extended from 1993 forward using ACDA data where possible, supplemented with data from The Military Balance.
o The Military Expenditure Data Set was updated from 1993 to 2001.2
o The Iron and Steel Data Set was first updated to 2001. Then researchers went back through the data set and re-confirmed the entire series, re-documenting the sources for all data points in the series.
o The Primary Energy Consumption Data Set was completely re-constructed for version 3.0 of the data set. All energy values were re-calculated from raw data sources, and compiled into a total energy consumption data value for each state in a given year. The data were also extended to 2001.
o The Total Population Data Set was first updated from 1993 until 2001 using United Nations data. Then researchers went back through the data set, re-documenting the data points; some data series were replaced, and some interpolations were re-calculated.
o The Urban Population Data Set was updated from 1990 until 2001.

CINC (Composite Index of National Capability) Score

The Composite Index of National Capability (CINC) score (Singer, Bremer and Stuckey, 1972) aggregates the six individual measured components of national material capabilities into a single value per state-year. The CINC reflects an average of a state’s share of the system total of each element of capabilities in each year, weighting each component equally. Because each share lies between 0 and 1, the CINC always ranges between 0 and 1: a value of 0.0 would indicate that a state had 0% of the total capabilities present in the system in that year, while 1.0 would indicate that the state had 100% of the capabilities in a given year (and, by definition, that every other state had exactly 0% in that year). More specifically, the CINC is calculated using the following steps, illustrated in the sketch that follows them:

1) The sum of each of the six capability elements is computed separately for each year. For example, if there were 10 states in the system in a given year, the IRST values for those 10 states would be summed to create a total amount of IRST production in the system. If a state’s value is missing, it contributes nothing to the total. This creates six “total” variables for each year: total IRST, total PEC, etc.;
2) Each state’s individual value in a year is divided by the total to create a share of the system total. For example, if a state has a MILPER value of 300, and the system total is 20000, the state’s share is 0.015. Each state now has a share-of-system value for each of the six NMC components. If a state’s individual value is missing, then the share value is coded missing; and
3) For each state, the values of the non-missing shares are averaged to produce the CINC score. So if a state had share values of 0.01, 0.02, 0.02, 0.03, 0.03, and 0.076, the CINC (average) value would be 0.031. The average is computed across the non-missing components only. Hypothetically, CINC could then be computed on as few as one component, if the other five were all missing in a given year. In practice, all observations in the NMC data set have at least two components. 83.29% of the state-year observations in the set have data on all six components; 13.76% have data on five; 2.71% have data on four; 0.23% of cases have data only on two or three components. Because CINC is sometimes computed on a varying number of components, the sum of all CINC scores across all states in the system in any year may be slightly greater than or less than 1.0.
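
The following minimal sketch walks through the three steps on invented numbers. The component abbreviations follow the NMC data set, but the function and the values are illustrative, not the project's own code.

```python
# Minimal sketch of the three CINC steps on toy data. Component names follow
# the NMC abbreviations; the state labels and values are invented.

COMPONENTS = ["IRST", "PEC", "MILEX", "MILPER", "TPOP", "UPOP"]

def cinc_scores(states):
    """states: {state: {component: value or None}} for a single year."""
    # Step 1: system totals per component; missing values contribute nothing.
    totals = {c: sum(v[c] for v in states.values() if v[c] is not None)
              for c in COMPONENTS}
    scores = {}
    for state, vals in states.items():
        # Step 2: share of the system total; missing values stay missing.
        shares = [vals[c] / totals[c] for c in COMPONENTS
                  if vals[c] is not None and totals[c] > 0]
        # Step 3: average across the non-missing shares only. (The NMC data
        # guarantee at least two non-missing components per observation.)
        scores[state] = sum(shares) / len(shares)
    return scores

year = {"A": {"IRST": 100, "PEC": 50, "MILEX": 10, "MILPER": 300,
              "TPOP": 1000, "UPOP": 200},
        "B": {"IRST": 300, "PEC": 150, "MILEX": 40, "MILPER": None,
              "TPOP": 3000, "UPOP": 800}}
print(cinc_scores(year))  # scores need not sum to exactly 1.0 when data are missing
```

With these toy numbers, state A averages six shares (0.358) while state B averages only five because its MILPER value is missing (0.77), so the two scores sum to more than 1.0, which is exactly the effect noted in step 3 above.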

Source: the authors of the dataset
