Project: FREE policy brief

Pay-for-Performance and Quality of Health Care: Lessons from the Medicare Reforms


Hospital and physician reimbursement attracts major policy attention in health care, owing to the sector’s large share of public expenditures and the welfare issues that demand regulation. The focus of this policy brief is quality adjustments of prospective payments in the health sector. Using data on the 2013 Medicare reform, we show differential effects of value-based purchasing, where price setting is tied to benchmark values of quality measures. Theoretical and empirical evidence indicates that unintended effects appear for acute-care U.S. hospitals in the best percentiles of quality. The findings provide insights into benchmarking within pay-for-performance schemes in health care.

Overview

The Russian national project “Health”, started by the federal government a decade ago and since expanded to regionally financed hospitals, is an example of a public remuneration scheme aimed at increasing health care efficiency. The project emphasized the role of the primary sector and raised the salaries of general practitioners. Part of these salaries was linked to patients’ assessment of the quality of health care, and the reimbursement was seen as a means to stimulate higher quality.

However, caution is required when introducing such payment mechanisms. International experience shows that quality-related pay in health care may have heterogeneous effects across different groups of providers. A recent CEFIR working paper uses administrative panels of U.S. hospitals to analyze how quality changed after the introduction of quality-related pay.

The U.S. Health Care Sector

Pilots of pay-for-performance

In the early 2000s, numerous private and public programs linking quality and reimbursements in health care existed in the U.S., mostly at the employer or state level (Ryan and Blustein, 2011; Damberg et al., 2009; Pearson et al., 2008). A nationwide pilot of quality-performance reimbursement started with the Hospital Quality Incentive Demonstration, where quality measures for five clinical conditions (heart failure, acute myocardial infarction, community-acquired pneumonia, coronary-artery bypass grafting, and hip and knee replacements) were collected from voluntarily participating hospitals. Some of these quality-reporting hospitals opted into the pay-for-performance project (initially established for 2003-2006, and later extended to 2007-2009). The project provided 2% and 1% bonus payments, respectively, for hospitals in the top and second top deciles of each quality measure (as of the end of the third year of the project). Hospitals in the bottom two deciles, on the other hand, were to receive 1-2% penalties (Kahn et al., 2006). Overall, the financial incentives helped improve the quality of the participating hospitals, but the improvement was inversely related to baseline performance (Lindenauer et al., 2007). Moreover, low-quality hospitals required the largest investments to raise quality, yet they were not financially stimulated (Rosenthal et al., 2004).

The accumulation of the measures within the Hospital Quality Incentive was followed by the launch of the Surgical Care Improvement Project (SCIP) and the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS). HCAHPS was the first national standardized survey with public reporting on various dimensions of patient experience of care. The measures of the clinical process of care domain are collected within the Hospital Inpatient Quality Reporting (IQR) program. These are measures for acute clinical conditions stemming from the Hospital Quality Incentive (i.e. acute myocardial infarction, heart failure, pneumonia), as well as measures from the Surgical Care Improvement Project and Healthcare Associated Infections.

The 2013 reform of Medicare

The success of the pilot project in the U.S., in terms of the average enhancement of hospital quality, resulted in the nationwide introduction of these reimbursement policies. Namely, a value-based purchasing reform started at Medicare’s acute-care hospitals in fiscal year 2013. The reform decreased Medicare’s prospective payment to each hospital by a factor α and redistributed the accumulated fund. As a result of this rule, all hospitals performing below the mean value of aggregate quality are financially punished, as their so-called adjustment coefficient is less than unity. At the same time, hospitals above the mean value are rewarded (see details in the Final Rule for 2013: Federal Register, Vol. 76, No. 88, May 6, 2011).

The aggregate quality – called the total performance score – is a weighted sum of the scores of the measures in several domains: patient experience of care, clinical process of care, outcome of care, and efficiency. The scores on each measure are based on the hospital’s position against the nationwide distribution of all hospitals. In short, positive scores are given to hospitals above the median, and higher scores correspond to performance at the higher percentiles. The scores are a stepwise function, assigning flat values of points to subgroups within a given percentile range. Hospitals above the benchmark (the 95th percentile or the mean of the top decile) are not evaluated according to their improvement relative to the performance in the previous year.
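To make the scoring mechanics concrete, the sketch below maps a hospital’s percentile rank on a single measure into points using a stepwise schedule and aggregates domain scores into a weighted total. The bin widths, point values and domain weights are illustrative assumptions, not the exact schedule of the Final Rule.

```python
# A minimal, illustrative sketch of stepwise achievement scoring.
# The bin edges, point values and domain weights below are assumptions
# for illustration, not the exact schedule of Medicare's value-based
# purchasing program.

def achievement_points(percentile: float) -> int:
    """Map a hospital's percentile rank on one quality measure to points."""
    if percentile < 50:          # below the median: no achievement points
        return 0
    if percentile >= 95:         # at or above the benchmark: maximum points
        return 10
    # between the median and the benchmark: flat steps of points
    # (every 5 percentile points adds one point in this illustration)
    return int((percentile - 50) // 5) + 1

def total_performance_score(domain_scores: dict, weights: dict) -> float:
    """Weighted sum of domain scores, as described in the text."""
    return sum(weights[d] * domain_scores[d] for d in domain_scores)

# Example: a hospital at the 72nd percentile on one measure
print(achievement_points(72))          # -> 5
print(total_performance_score(
    {"clinical_process": 80, "patient_experience": 65},
    {"clinical_process": 0.7, "patient_experience": 0.3}))  # -> 75.5
```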

If one assumes that hospitals are only maximizing profit, then such a linear payment schedule should stimulate quality increases across the whole spectrum of hospitals. However, the theoretical literature generally separates the hospital management, interested in profits, from the physicians who make the decisions affecting the level of quality. In particular, physicians are treated as risk-averse agents who have a decreasing marginal utility of money; that is, their valuation of monetary gains of a given size decreases as their income increases. In such a behavioral model (Besstremyannaya, 2015, CEFIR/NES WP 218), physicians’ decisions about the quality of care are shaped by the trade-off between the potential losses they may incur if fired in the event of a hospital budget deficit and/or bankruptcy, and their own costly effort to maintain and improve quality.
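A stylized way to see this trade-off (a simplification for exposition, not the exact specification in Besstremyannaya, 2015) is to let a physician choose quality effort $e$ to maximize

$$ -\,\pi\big(B(q(e))\big)\,L \;-\; c(e), $$

where $q(e)$ is quality, $B(\cdot)$ is the hospital budget (increasing in quality under value-based purchasing), $\pi(\cdot)$ is the probability of a budget deficit or bankruptcy (decreasing in the budget), $L$ is the physician’s loss from dismissal, and $c(\cdot)$ is a convex effort cost. The first-order condition $-\pi'\,B'\,q'\,L = c'(e)$ shows that when the budget is already comfortable, $|\pi'|$ is small, so the marginal benefit of additional effort is low and optimal effort, and hence quality, may stagnate or even fall.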

In this respect, the reform introduced two mechanisms: (1) it decreased the level of reward for low-quality hospitals and increased it for high-quality hospitals; and (2) it established a positive dependence of reward on quality. We show that the two forces compete, and the first may outweigh the second for physicians at high-quality hospitals. Indeed, in these hospitals improved budget financing makes bankruptcy, and hence the probability of being fired, less likely. As a result, physicians may be satisfied with a sufficiently high positive reward and unwilling to exert any further effort to raise it; they may even become de-motivated. Consequently, in these higher-quality hospitals the quality of care stabilizes or even declines after the reform.

To sum up, we hypothesize that quality scores increase at the lowest tail of the nationwide distribution, while they may stay stable or fall among the highest-quality hospitals. The sign of the mean/median effect is ambiguous.

Empirics

Data on quality measures and hospital characteristics such as urban/rural location and ownership come from Hospital Compare. The panel covers the period from July 2007 to December 2013 and consists of 3,290 hospitals (12,701 observations). We exploit first-order serial correlation panel data models, that is, longitudinal models in which the value of the dependent variable in the previous period (its lagged value) becomes one of the explanatory variables (see notations and definitions of the analyzed measures in Tables 1-2). The empirical part of the study evaluates the impact of the reform on changes in the quality scores of hospitals belonging to different percentiles of the nationwide distribution of each quality measure.
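A minimal sketch of this type of specification is given below. It assumes a long-format panel with hypothetical column names (hospital_id, period, score, post_reform, decile_group) and uses a simple pooled OLS with a lagged dependent variable; the working paper may rely on a different dynamic-panel estimator.

```python
# Illustrative sketch: quality score regressed on its own lag and on the
# reform indicator interacted with pre-reform percentile groups.
# Column names and the OLS estimator are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_reform_effect(df: pd.DataFrame):
    """df: long panel with columns hospital_id, period, score,
    post_reform (0/1) and decile_group (pre-reform decile of the measure)."""
    df = df.sort_values(["hospital_id", "period"]).copy()
    df["lag_score"] = df.groupby("hospital_id")["score"].shift(1)
    est = df.dropna(subset=["lag_score"])
    model = smf.ols("score ~ lag_score + post_reform * C(decile_group)", data=est)
    # Cluster standard errors by hospital to allow within-hospital correlation
    return model.fit(cov_type="cluster", cov_kwds={"groups": est["hospital_id"]})
```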

Table 1. Patient experience of care

Comp-1-ap Nurses always communicated well
Comp-2-ap Doctors always communicated well
Comp-3-ap Patients always received help as soon as they wanted
Comp-4-ap Pain was always well controlled
Comp-5-ap Staff always gave explanation about medicines
Clean-hsp-ap Room was always clean
Quiet-hsp-ap Hospital always quiet at night
Hsp-rating-910 Patients who gave hospital a rating of 9 or 10 (high)

Notes: Score on each measure is the percent of patients’ top-box responses to each question.

Table 2. Clinical process of care

AMI-8a Primary PCI received within 90 minutes of hospital arrival
HF-1 Discharge instructions (heart failure)
SCIP-Inf1 Prophylactic antibiotic received within 1 hour prior to surgical incision
SCIP-Inf3 Prophylactic antibiotics discontinued within 24 hours after surgery end time
SCIP-Inf4 Cardiac surgery patients with controlled 6 a.m. postoperative blood glucose
SCIP-VTE2 Surgery patients who received appropriate venous thromboembolism prophylaxis within 24 hours prior to surgery to 24 hours after surgery

Notes: Score on each measure is the percent of cases with medical criteria satisfied.

The estimation results offer persuasive evidence in favor of our hypotheses: quality goes up in the 1st-5th deciles and falls in the 6th-9th deciles (see Figures 1-2).

Figure 1. Mean change of scores owing to value-based purchasing across percentile groups of hospitals


It should be noted that the hypotheses concerning differential effects also rely on the fact that there is a certain population of hospitals to which each of the step rates applies (Monrad Aas, 1995). Hence, the threshold and/or benchmark value in the national schedule may be worse than the value already achieved in a given hospital. Reimbursement with benchmarking thereby becomes an additional cause of undesired effects.

Figure 2. Mean change of scores owing to value-based purchasing across percentile groups of hospitals


Conclusion

Our analysis confirms the presence of adverse effects of quality-performance pay in health care. A remedy may be found in setting the benchmark at the value of the best-performing hospital or in employing ‘episode-based’ payment, which rewards a hospital for each patient case treated with the corresponding criteria satisfied (Werner and Dudley, 2012; Rosenthal, 2008).

While the above results are based on U.S. data, they suggest that caution is required when applying pay-for-performance schemes to health care financing in transition countries as well, and much attention should be paid to potential adverse effects.

References

  • Besstremyannaya, Galina, 2015. “The adverse effects of incentives regulation in health care: a comparative analysis with the U.S. and Japanese hospital data”, CEFIR/NES Working Papers, No. 218, www.cefir.ru/papers/WP218.pdf
  • Damberg, Cheryl L., Raube, Kristiana, Teleki, Stephanie S. and dela Cruz, Erin, 2009. “Taking stock of pay-for-performance: a candid assessment from the front lines”, Health Affairs, Volume 28, pages 517-525.
  • Kahn, Charles N., Ault, Thomas, Isenstein, Howard, Potetz, Lisa and Van Gelder, Susan, 2006. “Snapshot of hospital quality reporting and pay-for-performance under Medicare”, Health Affairs, Volume 25, pages 148-162.
  • Lindenauer, Peter K., Remus, Denise, Roman, Sheila, Rothberg, Michael B., Benjamin, Evan M., Ma, Allen and Bratzler, Dale W., 2007. “Public reporting and pay for performance in hospital quality improvement”, New England Journal of Medicine, Volume 356, pages 486-496.
  • Monrad Aas, I., 1995. “Incentives and financing methods”, Health Policy, Volume 34, pages 205-220.
  • Pearson, Steven D., Schneider, Eric C., Kleinman, Ken P., Coltin, Kathryn L. and Singer, Janice A., 2008. “The impact of pay-for-performance on health care quality in Massachusetts, 2001-2003”, Health Affairs, Volume 27, pages 1167-1176.
  • Rosenthal, Meredith B., Fernandopulle, Rushika, Song, HyunSook Ryu and Landon, Bruce, 2004. “Paying for quality: providers’ incentives for quality improvement”, Health Affairs, Volume 23, pages 127-141.
  • Ryan, Andrew M. and Blustein, Jan, 2011. “The effect of the MassHealth hospital pay-for-performance program on quality”, Health Services Research, Volume 46, pages 712-72.
  • Werner, Rachel M. and Dudley, R. Adams, 2012. “Medicare’s new hospital value-based purchasing program is likely to have only a small impact on hospital payments”, Health Affairs, Volume 31, Number 9, pages 1932-1940.


Spatial Wage Inequality in Belarus


This policy brief summarizes the results of an analysis of wage inequality among the districts of Belarus over the period 2000-2015. The development of wage inequality varied noticeably across sub-periods: wage disparity decreased in 2000-2005, stayed stable in 2006-2012, and increased during the last three years. I find evidence of spatial dependency in wages between districts and of increasing separation within districts (between the rural and urban population). A decomposition of wage inequality across different quantiles of districts shows that the real wage growth rate in the lower percentiles exceeds that in the higher percentiles. From a theoretical point of view, my results reject the inverted U-shaped relationship between spatial inequality and economic development for Belarus, and support the hypothesis of the French economist Thomas Piketty that slow growth rates lead to a rise in inequality.

In Belarus, wages make up approximately 60% of household income and account for 46% of GDP. The equality of the wage distribution therefore affects the scale and degree of socio-economic disconnect in the country. On the one hand, too much inequality may dampen long-term growth. On the other hand, too much equality may reduce incentives for productivity improvements.

This policy brief outlines a study (Mazol, 2016) in which I examine wage inequality in Belarus using annual Belstat data on district average monthly nominal wages (excluding large cities) from 2000 to 2015, deflated by the country’s CPI (using 2000 as the base year).

Characteristics of district wages

According to Belarusian statistical definitions, at the end of 2015 Belarus had 118 districts with an overall population of 4.9 million (excluding large cities), corresponding to approximately 50% of the total population. The average district wage relative to the national mean increased from 74% in 2000 to 82% in 2005, indicating a catching-up process in wage income between districts and large cities (see Figure 1).

Figure 1. Decomposition of district real wages at the regional level of Belarus

Source: Author’s own calculations.

However, from 2013, the convergence of wages reverted to divergence (79% in 2015), suggesting that the relatively poor district population has become even poorer in recent years.

District wages differed by a factor of 2.8 in 2000 and 2.4 in 2015. Most of the lowest-wage districts are concentrated in the northern part of Belarus, in the Vitebsk region with its mostly rural population, whereas districts with the highest wages are mostly in the Minsk and Gomel regions, the central and most industrialized parts of Belarus (Minsk, Zhlobin, Mozyr and Soligorsk) (see Figure 2).

Figure 2. Map of Belarus’ districts by levels of real wages in 2015

Source: Author’s own calculations.

A common feature of the spatial allocation of district wages is that high- and low-wage districts tend to cluster with similar districts, indicating the presence of spatial dependence in the wage distribution.

Spatial interdependencies of district wages

The spatial characteristics are tested using the Global Moran’s I statistic (Moran, 1950). A positive coefficient means that neighboring districts have similar wages, and a higher value indicates a stronger relationship.
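For readers who want the mechanics, the sketch below computes Global Moran’s I for a wage vector and a spatial weights matrix with plain numpy; the wage values and the weights matrix in the toy example are hypothetical.

```python
# Minimal sketch of the Global Moran's I statistic (Moran, 1950):
#   I = (n / sum_ij w_ij) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2
# x is a vector of district wages; W is a spatial weights matrix (w_ij > 0 if
# districts i and j are neighbours). The example values below are hypothetical.
import numpy as np

def morans_i(x: np.ndarray, W: np.ndarray) -> float:
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # deviations from the mean wage
    num = (W * np.outer(z, z)).sum()      # spatially weighted cross-products
    den = (z ** 2).sum()
    return len(x) / W.sum() * num / den

# Toy example: four districts, first two and last two are neighbours
wages = np.array([4.6, 4.8, 6.0, 6.1])
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(wages, W), 2))       # positive: similar districts cluster
```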

The results show that the values of the Global Moran’s I statistic are positive and significant at the 5 percent level for the periods 2000-2008 and 2014-2015 (see Figure 3). This suggests that districts with similar high or low levels of wages tend to concentrate geographically.

Figure 3. Global Moran’s I statistic and GDP growth in Belarus

Source: Author’s own calculations.

Additionally, starting from 2012, the substantial increase in positive spatial interdependencies in wages between districts coincides with a significant decrease in economic growth. This suggests that the districts of Belarus tend to cluster more closely with each other during economic recessions, indicating a more profound formation of rich and poor clusters of districts. Such a trend could be caused by a lack of public financial resources, which restricts administrative redistribution of financial support in favor of poor districts. As a result, such districts tend to become even poorer (for example, districts in Vitebsk region).

Wage inequality in the districts of Belarus

Overall, the level of wage inequality among the districts of Belarus remains low over the studied period. Moreover, wage growth rates in low-wage districts are higher than in the richer districts, indicating the presence of a convergence process (see Figure 4). Yet, the differences between these two groups continue to be large. In 2015, the 10th and 90th percentiles of district wages were 4.6 and 6.1 million Belarusian rubles, respectively.
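As a minimal illustration of the dispersion measures used here, the sketch below computes the ratio of the 90th to the 10th percentile and the coefficient of variation (the latter reported in Figure 5) for a set of hypothetical district wages.

```python
# Illustrative computation of two inequality measures: the 90/10 percentile
# ratio and the coefficient of variation (CV). The district wage values
# below are hypothetical.
import numpy as np

wages = np.array([4.2, 4.6, 4.9, 5.1, 5.4, 5.8, 6.1, 6.4])  # million BYR

p10, p90 = np.percentile(wages, [10, 90])
ratio_90_10 = p90 / p10                  # cf. 6.1 / 4.6 in 2015, i.e. ~1.33
cv = wages.std(ddof=0) / wages.mean()    # dispersion relative to the mean

print(round(ratio_90_10, 2), round(cv, 2))
```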

Figure 4. Indexed real wage

Source: Author’s own calculations.

Regarding inequality dynamics, the country experienced a decline in wage disparity in 2000-2005, but from 2013 wage inequality started to rise (see Figure 5), coinciding with an economic slowdown and subsequent recession.

Figure 5. Measures of wage inequality

Note: CV – coefficient of variation. Source: Author’s own calculations.

Thus, during 2000-2015, Belarus’ accelerating economic growth first led to a decrease in district wage inequality. During the period of high and stable economic growth, the level of district wage inequality was constant. But during the last years of negative economic growth, district wage inequality in Belarus has started to increase again. From a theoretical point of view, these results reject the hypothesis of an inverted-U-shaped relationship between spatial inequality and economic development stated by Kuznets (1955), and confirm the hypothesis of the French economist Thomas Piketty (2014) that declining growth rates increase inequality.

Conclusion

My results suggest that spatial wage inequality in Belarus is a persistent phenomenon that has increased in recent years. I find evidence of a spatial dependency in wages between districts and of an increasing separation within districts (between the rural and urban population). These developments may lead to socio-economic instability, growth of the shadow economy, and even the emergence of depressed regions (for example, the Vitebsk region).

In order to decrease spatial wage inequality and increase overall economic efficiency in the districts of Belarus, the government needs to implement specific policies aimed at facilitating regional drivers of economic growth through the formation of new economic centers at the district level.

References

  • Barro, Robert J.; and Xavier Sala-i-Martin, 1992. “Convergence”. Journal of Political Economy, 100(2), 223-251.
  • Kuznets, Simon, 1955. “Economic growth and income inequality”. American Economic Review, 45(1), 1-28.
  • Mazol, Aleh, 2016. “Spatial wage inequality in Belarus”. BEROC Working Paper Series, WP no. 35, 37 p.
  • Moran, Patrick, 1950. “Notes on continuous stochastic phenomena”. Biometrika, 37(1/2), 17-23.
  • Piketty, Thomas, 2014. “Capital in the Twenty-first Century”. Cambridge, Massachusetts: Harvard University Press, 696 p.
  • Smith Neil, 1984. “Uneven development”. New York, NY: Blackwell, 198 p.
  • World Bank. 2009. World Development Report 2009. “Reshaping economic geography”. Washington, D.C.: The International Bank for Reconstruction and Development, 372 p.

Expanding Leniency to Fight Collusion and Corruption


Leniency policies offering immunity to the first cartel member that blows the whistle and self-reports to the antitrust authority have become the main instrument in the fight against cartels around the world. In public procurement markets, however, bid-rigging schemes are often accompanied by corruption of public officials. In the absence of coordinated forms of leniency for unveiling corruption, a policy offering immunity from antitrust sanctions may not be sufficient to encourage wrongdoers to blow the whistle, as the leniency recipient will then be exposed to the risk of conviction for corruption. Explicitly introducing leniency policies for corruption, as has been recently done in Brazil and Mexico, is only a first step. To increase the effectiveness of leniency in multiple offense cases, we suggest, besides extending automatic leniency to individual criminal sanctions, the creation of a ‘one-stop-point’ enabling firms and individuals to report different crimes simultaneously and receive leniency for all of them at once if they are entitled to it.

Leniency provisions to fight corruption

It has been noted that leniency policies and other schemes that encourage whistleblowing, such as reward and protection policies, should work in the fight against corruption as they do in the fight against collusion (Spagnolo, 2004; Spagnolo 2008; Buccirossi and Spagnolo, 2006). Cartels, corruption, and many other types of multi-agent offenses depend on a certain level of trust among wrongdoers, which is precisely what leniency programs aim to undermine by offering incentives for criminals to betray their partners and cooperate with the authorities (Bigoni et al., 2015; Leslie, 2004).

Of course, for offenses not covered by antitrust law, such as corruption, the relevant authorities may have their own ways of granting leniency and encouraging reporting, such as plea bargaining, whistleblower reward programs, deferred prosecution agreements (DPAs) and non-prosecution agreements (NPAs). Moreover, some countries have recently introduced explicit leniency programs for corruption (for example, Brazil and Mexico). Yet, we observe that those instruments do not always cover all types of sanctions, are seldom integrated with antitrust leniency, and are often under the responsibility of multiple law enforcement agencies. Hence, improvements in the legal frameworks still seem necessary.

Leniency in a multi-offense scenario: the case of corruption cartels

Cartel offenses may be connected to other infringements. A particularly frequent and deleterious example of a multiple offense situation is the simultaneous occurrence of collusion (bid rigging) and corruption in public procurement (OECD, 2010). While cartels are estimated to raise prices by 20% or more above competitive levels (Connor, 2015; Froeb et al., 1993), corruption may add 5–25% to total contract values (EU, 2014; OECD, 2014b). Since public procurement is a market amounting to 13–20% of GDP in developed countries (OECD, 2011), it is clear that collusion and corruption represent a serious waste of public funds, negatively impacting the quality of public infrastructure and services provided by a state to its citizens.
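As a purely illustrative back-of-the-envelope calculation, combining mid-range values of the figures quoted above (and not an estimate from the literature), the order of magnitude of the fiscal cost can be gauged as follows.

```python
# Back-of-the-envelope illustration combining the ranges cited above.
# All inputs are mid-range assumptions, not estimates from the literature.
procurement_share_of_gdp = 0.15   # public procurement: 13-20% of GDP
cartel_overcharge = 0.20          # cartels: prices ~20% above competitive level
corruption_addon = 0.10           # corruption: 5-25% added to contract values

extra_cost_share = (1 + cartel_overcharge) * (1 + corruption_addon) - 1
waste_as_share_of_gdp = procurement_share_of_gdp * extra_cost_share
print(f"~{waste_as_share_of_gdp:.1%} of GDP")   # roughly 5% of GDP in this example
```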

Authorities then face two distinct, yet inter-related, challenges in guaranteeing the effectiveness of public procurement: ensuring integrity in the procurement process and promoting effective competition among suppliers (Anderson, 2010). Considering that success in deterring cartels and corruption depends largely on the incentives provided to infringers to self-report, the interaction between leniency provisions for cartels and the legal treatment of corruption adds a powerful new channel to the above-noted interdependence, and thus should be, and already is, a concern for antitrust and anti-corruption authorities (OECD, 2014a).

A member of a corrupting cartel that blows the whistle on the cartel and applies for leniency to the antitrust authority will likely have to disclose information on the other infringement. Such information may then be used by the relevant law enforcement authority to prosecute and punish the applicant. Thus, the risk of prosecution for other cartel-connected offenses (corruption in this case) may reduce the attractiveness of reporting the cartel (Leslie, 2006). This kind of uncertainty works against the leniency policy’s deterrence goals and may even stabilize the cartel by providing its members with a credible threat to be used to prevent betrayal among them.

Existing leniency provisions for corrupting cartels

Antitrust leniency provisions are very similar worldwide, differing mainly in whether cartels are only considered administrative infringements or are also criminally liable offenses. Where there is individual criminal liability, leniency programs should cover it. Surprisingly, Austria, France, Germany and Italy, where cartel conduct, or at least bid rigging, is a criminal offense, do not follow this guideline. In these jurisdictions, the co-operation of an individual with the antitrust authority during the administrative proceedings may be considered a mitigating circumstance, reducing imposed penalties or even allowing a discharge, but only at the discretion of the court or the prosecution, which is likely to greatly reduce the propensity of wrongdoers to blow the whistle.

On the other hand, countries do not usually have specific leniency programs for corruption. Nonetheless, self-reporting and cooperation in bribery cases are usually given great importance by authorities and may lead to leniency and even immunity through other mechanisms, such as plea agreements, no-action letters, NPAs or DPAs, but those instruments rely on prosecutorial or judicial discretion. Brazil and Mexico do have formal leniency programs for corruption, providing more certainty and thus being more attractive to an applicant, although restricted to administrative liability. Individual corruption-related criminal provisions are laid down in each country’s criminal code and follow the recommendations made by the United Nations, in the 2003 Convention against Corruption, and by the Organization for Economic Co-operation and Development, under its 1997 Convention on Combating Bribery of Foreign Public Officials in International Business Transactions.

Since the enforcement authorities for collusion and corruption differ in most cases, such an arrangement requires the infringer to seek non-prosecution through at least two separate agreements, one with the antitrust authority and the other with the anti-corruption agency. The difficulty in coordinating such agreements is an obvious issue and will vary according to the number of authorities involved and to the proximity among them, which ranges from divisions of the same agency in the case of the United States (the Antitrust and Criminal Divisions of the Justice Department) to organizations from different government branches (executive and judiciary) in most jurisdictions.

In Brazil and the United States, antitrust leniency programs can provide protection for non-antitrust violations committed in connection with an antitrust violation. While in Brazil this provision does not currently include corruption infringements, in the United States it does, but it only binds the Antitrust Division and no other federal or state prosecuting agencies, i.e. leniency agreements may not prevent other authorities from prosecuting the applicant for the non-antitrust violation.

How to improve the current legal framework

Countries should follow Brazil and Mexico’s example and create ex ante leniency programs for corruption infringements that do not rely on prosecutorial or judicial discretion. Unlike those programs, however, leniency should also cover individuals, especially in terms of criminal liability for bid rigging and corruption. Protection from lawsuits for managers and directors could then become a primary incentive for them to blow the whistle on their own and their companies’ illegal acts.

Additionally, it is advisable not to depend on collaboration between law enforcement groups, but to establish clear legal provisions to allow wrongdoers to report all illegal acts simultaneously and to be confident that they will escape sanctions upon co-operation with the authorities and presentation of evidence, i.e. the creation of a ‘one-stop point’.

This ‘one-stop point’ should be available to applicants at every law enforcement agency and must prevent other agencies from prosecuting the leniency applicant. In other words, when someone approaches any authority, as an individual or as a representative of a legal person, to report crimes he is involved in, it is important to allow him to report any other crimes that he knows about in exchange for lenient treatment. In order to prevent conflicts among agencies, the authority first contacted by the wrongdoer must be obliged to immediately involve any other authority that may be competent over other reported infringements. The self-reporting wrongdoer must be reasonably certain that he will be granted leniency for all reported wrongdoings, provided, obviously, that he fulfills the legal requirements for each infringement. Failing to report all known involvement in infringements may be a reason to reduce or even revoke leniency altogether, creating a ‘penalty plus’-like provision across different areas of law and a more powerful incentive for a thorough self-report.

Information about the possibility of reporting several illegal acts at the same time, and of obtaining leniency for each one, must be consistently disseminated to minimize detection and prosecution costs, as well as to contribute to the deterrence of future criminal behavior.

Finally, we note that companies and individuals from jurisdictions where leniency provisions for corruption are highly discretionary or non-existent would be less inclined to report cartel behavior abroad when they have bribed foreign public officials. Despite existing confidentiality rules in leniency programs, they might not want to risk being prosecuted for corruption at home. This would possibly block antitrust leniency agreements by removing the incentives to self-report, undermining the ability to catch international corrupting cartels. To prevent this, laws should be amended to allow leniency for a company or individual that self-reports abroad, and further coordination and collaboration between agencies from different countries would be necessary to avoid stabilizing criminal collusion and undermining the effectiveness of leniency programs.

Conclusion

The fight against cartels and bribery requires efforts on a national level as well as multilateral co-operation.

Creating leniency policies to fight corruption, including foreign bribery, and coordinating them with antitrust leniency policies emerges as an important priority. The absence of formal leniency programs for corruption, besides hindering anti-corruption enforcement, reduces wrongdoers’ incentives to blow the whistle and collaborate in corrupting cartel cases because of the risk of criminal prosecution for the corruption offense. These programs must be carefully designed, however, to avoid opportunistic behavior and thus achieve their goal of deterrence.

In order to increase the effectiveness of leniency programs in multiple-offense cases, we suggest the creation of a ‘one-stop point’, enabling firms and individuals to report different crimes simultaneously and obtain leniency, provided that they offer sufficient information and evidence for their partners in crime to be prosecuted.

References

  • Anderson, R. D.; Kovacic, W. E.; Müller, A. C., 2010. Ensuring integrity and competition in public procurement markets: a dual challenge for good governance, in The WTO Regime on Government Procurement: Challenge and Reform (Sue Arrowsmith & Robert D. Anderson eds.).
  • Bigoni, M., Fridolfsson, S.O., Le Coq, C., Spagnolo, G., 2015. Trust, Leniency and Deterrence, 31 J. LAW ECON. ORGAN., 663.
  • Buccirossi P.; Spagnolo, G., 2006. Leniency policies and illegal transactions, 90 J. PUBLIC ECON., 1281.
  • Connor, J. M., 2014. Cartel overcharges, in The Law And Economics Of Class Actions (James Langenfeld ed.).
  • European Commission, 2014. Report from the Commission to the Council and the European Parliament—EU Anti-Corruption Report 2014.
  • Froeb, L. M.; Koyak, R. A.; Werden, G. J., 1993. What is the effect of bid rigging on prices?, 42 ECON. LETT., 419.
  • Leslie, C. R., 2004. Trust, Distrust, and Antitrust, 82 TEX. L. REV. 515.
  • Leslie, C. R., 2006. Antitrust Amnesty, Game Theory, and Cartel Stability, 31 J. CORP. L. 453.
  • OECD, 2010. Global Forum on Competition Roundtable on Collusion and Corruption in Public Procurement.
  • OECD, 2011. Public Procurement for Sustainable and Inclusive Growth – Enabling reform through evidence and peer reviews.
  • OECD, 2012. Improving International Co-Operation in Cartel Investigations.
  • OECD, 2014a. 13th Global Forum on Competition Discusses the Fight Against Corruption, Executive Summary.
  • OECD, 2014b. OECD Foreign Bribery Report: An Analysis of the Crime of Bribery of Foreign Public Officials.
  • Spagnolo, G. 2004. Divide et Impera: Optimal Leniency Programs, CEPR Discussion Paper nr 4840, available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=716143
  • Spagnolo, G., 2008. Leniency and Whistleblowers in Antitrust, in Handbook of Antitrust Economics (Paolo Buccirossi ed.), Cambridge MA: MIT Press.
  • Stephan, P. B., 2012. Regulatory Competition and Anticorruption Law, 53 VA. J. INT. LAW 53.
  • Waller, S. W., 1997. The Internationalization of Antitrust Enforcement. 77 BOSTON U. LAW REV. 343.

Russia’s State Armament Plan of 2010 – The Macro View in mid-2016


Russian defense spending has increased significantly in recent years and reached over 4 percent of GDP in 2015 according to estimates. If the Russian state armament program for 2011-2020 is fulfilled, further large investments will be made in the years to come to modernize the military forces. However, the macroeconomic realities have changed dramatically since the original plans were drawn up in 2010. This brief provides an analysis of what the new macroeconomic reality means for the armament plans that were made in 2010. In short, the major issue is not that spending as a share of GDP has increased dramatically, but rather that the nominal ruble amounts that make up the plan now buy significantly less in both real ruble and dollar terms according to the most recent forecasts. In other words, it is not necessarily the trade-off between different government spending areas that will be the main issue in this new macroeconomic environment, but rather what the priorities will be regarding different types of military equipment within the existing plan.

A 2016 study by Julian Cooper details Russia’s state armament plans for 2011 to 2020, “GPV-2020” (in Russian, the state armament program is Gosudarstvennaia Programma Vooruzheniia), to the extent possible using open-source information. He makes a special point of discussing the non-transparent structure of Russian defense spending, which makes more precise calculations and statements regarding this expenditure area difficult or even impossible. Nevertheless, he provides broad numbers for the state armament plans that are publicly available, and these are used in this brief.

The plans of 2010

The state armament plans for 2011-2020 that were made in 2010 were stated in nominal ruble terms. The full path of the plan has not been announced but a total of 19 trillion rubles has been mentioned.

Figure 1. Armament and defense spending

Source: Author’s calculations based on Cooper (2016)

Cooper’s study details amounts until 2015, and in Figure 1 the remaining years have been guesstimated by a smooth trend that delivers a cumulative plan of 19 trillion rubles.

The armament plans were very ambitious, and it is noteworthy that they were almost fully implemented during the years for which we have actual numbers from Cooper’s study (the blue and red lines almost overlap perfectly). The other rather remarkable feature is how high this spending is compared to the national defense spending reported in his study, with the GPV plan peaking at 70 percent of defense spending.

Changing macro environment

The armament plans were not made in a vacuum but decided based on the economic outlook at the time, i.e., what policy makers projected in 2010.

Figure 2. IMF forecasts and actual GDP

Source: Author’s calculations based on IMF (2010, 2016). Note: The IMF’s 2010 forecast only goes to 2015 and for the remaining years a constant growth rate based on the last year is used.

Figure 2 shows what the IMF’s growth forecasts back in 2010 implied for the development of nominal GDP (dotted blue line); what actually happened until 2015 (solid red line); and what is projected to happen between 2016 and 2020 according to the latest IMF World Economic Outlook forecast of April 2016 (dotted red line). As pointed out in Becker (2016), international oil prices are key for Russia’s growth performance, and any forecast of it is no better than the forecast of oil prices. This implies that the IMF’s April 2016 projection is also highly uncertain, but this is true for any other forecast of Russian GDP as well.

There are two important observations that follow from Figure 2: first, nominal GDP at the start of the program was underestimated; and second, the growth rate was overestimated. As coincidence sometimes has it, two wrongs come close to making a right for 2016; i.e., the forecast from 2010 almost perfectly coincides with the expected nominal GDP level in 2016 and 2017 in the latest IMF forecast. However, since the slowdown in expected growth is rather significant, in later years the IMF now expects nominal GDP to be lower than what it projected in 2010.

Implications for the GPV

The fact that nominal GDP in 2016 and 2017 is almost exactly the same as projected in 2010 implies that the GPV plan as a share of GDP is almost the same in 2016 and 2017 whether it is based on the 2010 or the 2016 forecast. This may be viewed as a peculiar coincidence, but it can also have real implications. If the plan in 2010 was developed with a broader view of priorities across different government spending areas, the fact that the plan is still not absorbing a larger share of GDP suggests that it may not necessarily be a contentious issue at the level of the government.

However, this is expected to change after 2017 when nominal GDP will be lower than originally thought, and therefore the GPV share of GDP would be higher as seen in Figure 3.

Figure 3. GPV plan as share of GDP

Source: Author’s calculations based on Cooper (2016) and IMF (2010, 2016)

A more immediate concern is what the nominal spending plan from 2010 actually buys in real terms in 2016. This is a more fundamental issue than changes in nominal GDP, since it affects how quickly the armed forces can modernize their equipment. Figure 4 compares how the real purchasing power of the plan has changed between the 2010 and the 2016 forecasts, both in constant (or real) ruble terms (green and purple lines) and in nominal U.S. dollar terms (red and blue lines).

Figure 4. The real spending power of GPV

Source: Author’s calculations based on Cooper (2016) and IMF (2010, 2016)

It is clear that there has been a significant reduction in purchasing power in both real ruble and dollar terms. The cumulative change in real ruble terms is a loss of 12 percent in purchasing power, while the loss in dollar terms is 45 percent. Since most of the loss in spending power occurs from 2014 onward, the impact in the remaining years is even larger than these cumulative numbers indicate.
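A minimal sketch of this type of calculation, with purely illustrative price-level and exchange-rate paths rather than the actual IMF figures, is given below.

```python
# Illustrative sketch: real purchasing power of a fixed nominal ruble plan
# under two macro scenarios (the 2010 forecast vs. a later, weaker forecast).
# All plan amounts, price-level and exchange-rate paths are made-up numbers.
def real_value(nominal_plan_rub, price_level):
    """Deflate nominal ruble spending by the price level (base year = 1)."""
    return [n / p for n, p in zip(nominal_plan_rub, price_level)]

def dollar_value(nominal_plan_rub, rub_per_usd):
    """Convert nominal ruble spending to U.S. dollars."""
    return [n / e for n, e in zip(nominal_plan_rub, rub_per_usd)]

plan = [1.5, 2.0, 2.5]                      # trillion rubles per year (illustrative)

# Scenario assumed in 2010: moderate inflation, stable exchange rate
real_2010 = real_value(plan, [1.00, 1.06, 1.12])
usd_2010 = dollar_value(plan, [30, 31, 32])

# Scenario as of 2016: higher inflation, much weaker ruble
real_2016 = real_value(plan, [1.00, 1.15, 1.30])
usd_2016 = dollar_value(plan, [30, 55, 65])

loss_real = 1 - sum(real_2016) / sum(real_2010)
loss_usd = 1 - sum(usd_2016) / sum(usd_2010)
print(f"real-ruble loss: {loss_real:.0%}, dollar loss: {loss_usd:.0%}")
```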

The actual impact on the spending plan will crucially depend on how much of what is planned needs to be imported, but it is nevertheless clear that there has been a significant reduction in purchasing power if the initial nominal ruble plan is implemented. This is without considering the impact of sanctions or of any reallocation of government resources to other spending areas, both of which would affect this calculation.

Policy conclusions

Although the precision of the discussion in this brief is no better than the accuracy of the available numbers, the general trends and qualitative conclusions made here are most likely still relevant. And without any claim of being able to assess the quality of military equipment or the ability of Russia’s military-industrial complex to set the right priorities (see instead Rosefielde, 2016 for such a discussion), it is clear from a pure economics standpoint that the changing macro environment will have serious real implications for how quickly the modernization of equipment can proceed.

It is also highly likely that the worsening of the economic outlook in 2016 compared with 2010 will lead to more general discussions of government spending priorities. Spending on arms production by the military-industrial complex could in principle be a Keynesian type of demand injection that raises growth in the short run, if there are idle resources that are put to use and generate income for workers who in turn spend more on consumption. However, the resources required to build sophisticated new military equipment are unlikely to be idle even in an economic downturn, so this effect is probably not very significant. Instead, more spending in areas that are already in short supply will generate inflation or put pressure on the exchange rate, depending on how much of the demanded goods and services is produced domestically and how much is imported.

Long-term growth can also be affected if the GPV plan crowds out resources from other spending areas. The effect will of course depend on what the spending alternatives are and how they are linked to future growth; if military spending does not generate growth by itself while reducing spending on education, research and health care, which we think promote long-term growth, prioritizing military spending will carry an additional price in terms of reduced future growth. There could be cases where spillovers from military production are significant and spur new businesses and thus generate economic growth, but this does not seem to have been the case in Russia in the past.

In short, it will be hard for policy makers to avoid tough decisions on which spending areas to prioritize given the new macro outlook for Russia. And even if the nominal ruble spending in the GPV-2020 plan does not change, new trade-offs will have to be made within the plan, given how higher inflation and a depreciated currency have reduced the purchasing power of the original 2010 plan.

References

  • Becker, T, 2016, “Russia’s oil dependence and the EU”, SITE Working paper 38, August.
  • Rosefielde, S., 2016, “Russia’s Military Industrial Resurgence: Evidence and Potential”, Paper prepared for the conference on The Russian Military in Contemporary Perspective Organized by the American Foreign Policy Council, Washington DC, May 9-10, 2016.
  • Cooper, J., 2016, “Russia’s state armament programme to 2020: a quantitative assessment of implementation 2011-2015”, FOI report, FOI-R-4239-SE.
  • IMF, 2010, World Economic Outlook, October 2010 data, http://www.imf.org/external/pubs/ft/weo/2010/02/weodata/index.aspx
  • IMF, 2016, World Economic Outlook, April 2016 data, http://www.imf.org/external/pubs/ft/weo/2016/01/weodata/index.aspx


And the Lights Went Out – Measuring the Economic Situation in Eastern Ukraine


This policy brief assesses the economic situation in the war-affected East of Ukraine. Given that official statistics are not available, we use changes in nighttime light intensity, measured by satellites, to estimate to what extent the war has destroyed the economy, and whether any recovery can be observed since the Minsk II agreement.

This FREE Policy Brief is simultaneously published as a column at VoxUkraine.org/en.

Correct measurement of economic performance is difficult enough in peaceful times and in settings where reliable economic indicators are available. However, when the necessary data are missing or their reliability is far from clear, assessing the degree of economic activity, even in the crudest of forms, becomes a significant challenge. And yet, such situations are very frequent, apply to many countries and regions, and become most evident at times of military conflict, when data collection is far from a top priority. In the context of the Ukrainian conflict, an example of indirectly estimating changes in economic performance can be found in Talavera & Gorodnichenko (2016), who focus on measures of the degree of price integration in the so-called Luhansk and Donetsk National Republics (LNR/DNR). In addition, various articles use anecdotal evidence to illustrate the economic losses in the East of Ukraine. For example, BBC (2015) mentions an estimate by the Ukrainian Ministry of Economy that, by mid-2015, 50% to 80% of jobs had been lost in the so-called Luhansk and Donetsk National Republics compared to the pre-war situation. Knowing the economic situation in the East is important both to assess the economic viability of the so-called LNR/DNR and to assess the likely humanitarian situation there.

An alternative indirect way to examine the intensity of economic activity is to use measures based on satellite nighttime light intensity images. Nighttime light intensity is closely related to electricity consumption, which often has been used as an indicator of economic activity (e.g. Arora and Lieskovsky, 2014). Nighttime light intensity has been used to assess economic activity in sub-Saharan Africa (Henderson et al., 2012), the impact of the crisis in Syria (Li and Li, 2014) or to study how elected politicians favour their own regions worldwide (Hodler and Raschky, 2014). Henderson et al. (2012) find that among low- and middle-income countries, a one percent change in light roughly corresponds to a one percent change in income. [1]

In this note, we use nighttime light intensity to measure economic activity in Eastern Ukraine since the outbreak of the war in April 2014.[2] As a reference point we use the nighttime light intensity in March 2014, prior to the outbreak of violence, and we focus on Ukraine’s capital Kyiv and a number of big and small cities in Eastern Ukraine that we know have been heavily affected by the conflict. In Table 1, we compare the light intensity at several points in time (May 2014; August 2014; January 2015; March 2015; March 2016) to the light intensity in March 2014 in these selected cities.
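A minimal sketch of the comparison underlying Table 1 is shown below. It assumes the monthly VIIRS radiance values for a city have already been clipped to the same administrative boundary and loaded as arrays; the Henderson et al. (2012) elasticity of one is then applied to translate the light change into an income change.

```python
# Minimal sketch of the light-intensity comparison behind Table 1.
# Assumes radiance rasters for one city have already been clipped to the same
# administrative boundary and loaded as 2-D numpy arrays (nW/cm2/sr).
# The toy radiance values below are hypothetical.
import numpy as np

def light_ratio(radiance_t, radiance_base):
    """Sum radiance over the (identical) city area and take the ratio."""
    return np.nansum(radiance_t) / np.nansum(radiance_base)

def implied_income_change(ratio, elasticity=1.0):
    """Henderson et al. (2012): ~1% light change <-> ~1% income change."""
    return (ratio - 1.0) * elasticity

# Toy example standing in for one war-affected city, March 2014 vs. March 2015
march_2014 = np.full((100, 100), 3.0)   # hypothetical pre-war radiance
march_2015 = np.full((100, 100), 1.0)   # hypothetical wartime radiance

r = light_ratio(march_2015, march_2014)
print(f"light ratio: {r:.2f}, implied income change: {implied_income_change(r):.0%}")
# -> light ratio: 0.33, implied income change: -67%
```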

Figure 1. Nighttime images of Kyiv (a), Donetsk (b), and Luhansk (c) in March 2014, 2015, and 2016

Panels: (a) Kyiv, (b) Donetsk, (c) Luhansk; columns show March 2014, March 2015, and March 2016.


Notes: Radiance was linearly scaled from 0 to 10 nW/cm2/sr, where black pixels represent 0 and white represent 10 or more nW/cm2/sr. Administrative boundaries for cities: © OpenStreetMap contributors, CC BY-SA.

Figure 1 presents sample images of nighttime illumination for Kyiv, Donetsk and Luhansk in March 2014, 2015 and 2016. We can see that between March 2014 and 2015, in the case of Donetsk and Luhansk, both the lit surface area and the measured light intensity decreased significantly, while there is very little change in the case of Kyiv. A similar picture emerges in other cities that were not directly affected by the war, such as Zaporizhia, Dnipropetrovsk and Kharkiv (see Table 1). While, as in Kyiv, there are ups and downs in measured nighttime light intensity, by and large the level of economic activity remains fairly similar over time.

Table 1. Change in nighttime light intensity across time for selected cities in Ukraine

Notes: The numbers in the table are ratios of light intensity, comparing a given point in time to March 15, 2014. Hence, a value of 1 suggests no change, values above 1 suggest improvements, and values below 1 suggest decreases in economic activity.

The situation is clearly different in Donetsk and Luhansk, the two major occupied towns. Nighttime light intensity in Donetsk is about half of the level it was before the outbreak of violence in the East of Ukraine. Luhansk fares even worse – light intensity as measured in March 2015 and 2016 is roughly a third of the initial level (Table 1).

Ilovaisk and Debaltseve, two cities where major battles took place and which are now under the control of the so-called DNR/LNR, have clearly suffered a lot and are still far from recovering. Ilovaisk is at about a third of its original level of light intensity, while Debaltseve is at less than a tenth (!) of its 2014 level. It is thus clear that economic recovery in these areas takes a long time, and this is also true for the government-controlled areas, as illustrated by the fact that cities such as Sloviansk and, to a lesser extent, Kramatorsk are also still far from their pre-conflict level of light intensity.

Conclusion

The above analysis of changes in nighttime light intensity leads to two important conclusions. First, the impact of the war in Eastern Ukraine on the level of economic activity in the area is sizeable and varies considerably across towns. Levels of nighttime light intensity are at 30 to 50% of their pre-war level in the big cities and at only a tenth of their pre-war level in some smaller cities. Using the Henderson et al. (2012) one-to-one ratio of changes in nighttime light intensity and economic development, this suggests that economic activity in the Donbas region has dropped correspondingly, to 30 to 50% of the pre-war level in the big cities and to only a tenth of the pre-war level in some smaller cities. [3]

Second, there has been no sign of economic recovery in the region since the Minsk I and II agreements. Even though military activity in the Donbas region has decreased compared to the period April 2014-February 2015, the economy – at least as measured by the intensity of lights – has not been improving and the economic situation of the Donbas population remains very far from what it used to be before the war.

______________________________________________________

[1] ‘The elasticity of growth of lights emanating into space with respect to income growth is close to one (p. 1025)’

[2] We use version 1 nighttime monthly data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) generated by the Earth Observation Group at the NOAA National Geophysical Data Center and made publicly available for download.

[3] Given the specificity of light intensity measures, we focus on changes between periods rather than levels because light intensity is computed as the sum of radiance over a selected area, and hence the level of intensity depends on the scale of the area. For comparisons over time, we always use the same geographic area. It is important to remember that these changes are proxies only since changes in light intensity can be sensitive to weather conditions over time. Thus, to be able to make informative judgement on the basis of these data, we focus on the broad picture that emerges from the data, rather than on specific values.

 

References

  • Arora, Vipin and Jozef Lieskovsky (2014), “Electricity Use as an Indicator of U.S. Economic Activity”, U.S. Energy Information Administration Working Paper.
  • BBC (2015) – Ukrainian Service, ‘ One year after the referendum DNR/LNR: Economic Losses’, May 12 2015.
  • Henderson, J. Vernon , Adam Storeygard, and David N. Weil (2012), Measuring Economic Growth from Outer Space, American Economic Review 2012, 102: 994–1028
  • Hodler, Roland, and Paul A. Raschky (2014), Regional Favouritism. Quarterly Journal of Economics 129: 995-1033.
  • Talavera, Oleksandr and Yuriy Gorodnichenko (2016), How’s DNR Economy Doing, VoxUkraine April 7, 2016
  • Xi Li & Deren Li (2014) Can night-time light images play a role in evaluating the Syrian Crisis?, International Journal of Remote Sensing, 35: 6648-6661.


Effects of Trade Wars on Belarus


The trade wars following the 2014 events in Ukraine affected not only the directly involved participants, but also countries like Belarus that were affected through international trade linkages. According to my estimations, based on a model outlined in Ossa (2014), these trade wars led to an increase in the trade flow through Belarus and thereby an increase in its tariff revenue. At the same time, because of a ban on imports in the sectors of meat and dairy products, the tariff revenue of Russia declined. As a member of the Eurasian Customs Union (EACU), Belarus can only claim a fixed portion of the EACU’s total tariff revenue. Since the decline in the tariff revenue of Russia led to a decline in the total tariff revenue of the EACU, there was a decrease in the after-redistribution tariff revenue of Belarus. As a result, Belarusian welfare decreased. To avoid further welfare declines, Belarus should argue for a modification of the redistribution schedule. Alternatively, Belarus could increase its welfare during trade wars by shifting from being a part of the EACU to only being a part of the CIS Free Trade Area (FTA). If Belarus were only part of the CIS FTA, its optimal tariffs during trade wars would be higher than the optimal tariffs without trade wars: the optimal response to the increased trade flow through Belarus is higher tariffs.

Following the political protests in 2014, Ukraine terminated its membership in the CIS Free Trade Area (FTA) and moved towards becoming a part of the EU. The political protests evolved into an armed conflict and a partial loss of Ukrainian territory. These events led Western countries to introduce sanctions against some Russian citizens and enterprises. In response, Russia introduced a ban on imports from EU countries, Australia, Norway, and the USA in the sectors of meat products, dairy products, and vegetables, fruits and nut products. In addition, both Ukraine and Russia increased the tariffs on imports from each other in the above-mentioned sectors.

Clearly, the trade wars affected directly involved participants such as the EU countries, Russia, and Ukraine. At the same time, countries like Belarus that were not directly involved in the trade wars were also affected because of international trade linkages. It is important to understand the influence of trade wars on non-participating countries. To address this question, I use a framework with many countries and international trade linkages, and in this policy brief I present some of my key findings.

Framework and Data

To evaluate the effects of the trade wars, I use the methodology outlined in Ossa (2014). This framework is based on the monopolistic competition market structure that was introduced into international trade by Krugman (1979, 1981). The framework in Ossa (2014) allows for many countries and sectors, and for a prediction of the outcome if one or several countries change their tariffs. Perroni and Whalley (2000) and Caliendo and Parro (2012) present alternative frameworks with many countries that can also be used to estimate the welfare effects of tariff changes. The important advantage of the framework introduced in Ossa (2014) is that only data on trade flows, domestic production, and tariffs are needed to evaluate the outcomes of a change in tariffs, although the model itself contains other variables such as transportation costs, the number of firms, and productivities.

It should also be pointed out that the framework in Ossa (2014) is not a CGE model, as it does not contain features such as investment, savings, and taxes. Since it is simpler than CGE models, the effects of a tariff change can be more easily tracked and interpreted. On the other hand, the framework does not take into account spillover effects of tariff changes on, for example, capital formation and trade in assets.

The data on trade flows and domestic production come from the seventh version of the Global Trade Analysis Project database (GTAP 7). The data on tariffs come from the Trade Analysis Information System database (TRAINS). The model is estimated for 47 countries/regions and the sectors of meat and dairy products.

Results

According to my estimations, the ban on imports by Russia increased the trade flow through Belarus. Belarusian imports of meat products are estimated to have increased by 28%, and imports of dairy products by 47%. Such increases in imports imply an increase in the tariff revenue of Belarus. It should be pointed out, however, that the model only tracks the effects of the import ban in the meat and dairy sectors. An alternative approach would be to construct an econometric model that takes into account the different factors influencing trade between the countries; this would make it possible to separate the effects of the import ban from those of the decline in the oil price, which happened close in time.

The estimated model further predicts that, because of the import ban, the tariff revenue collected by Russia in these two sectors decreased by 53%. Since Belarus can only claim a fixed portion (4.55%) of the total tariff revenue of the EACU, its after-redistribution tariff revenue in the meat and dairy sectors declined by 44.86%, despite a 35% increase in its before-redistribution tariff revenue. This decline in after-redistribution tariff revenue is estimated to have reduced Belarusian welfare by 0.03%. To prevent such a decrease in the future, Belarus should argue for an increase in its share of the total tariff revenue of the EACU.
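To make the pooling mechanism concrete, the short Python sketch below applies the fixed 4.55% share to hypothetical baseline revenue collections. Only the share and the percentage changes reported above come from the analysis; the baseline levels, the assumption that Russia collects the bulk of the union's revenue in these sectors, and the unchanged revenue of the other members are illustrative assumptions.

# Illustrative sketch of the EACU tariff-revenue pooling described above.
# Baseline collections (arbitrary units) are hypothetical; only the 4.55% share
# and the percentage changes (+35% for Belarus, -53% for Russia) come from the text.

BELARUS_SHARE = 0.0455  # Belarus' fixed share of pooled EACU tariff revenue

def after_redistribution(collections):
    """Belarus' revenue after pooling: a fixed share of total EACU collections."""
    return BELARUS_SHARE * sum(collections.values())

before = {"Belarus": 5.0, "Russia": 95.0, "Other EACU members": 10.0}
after = {"Belarus": before["Belarus"] * 1.35,                 # +35% before redistribution
         "Russia": before["Russia"] * 0.47,                   # -53% because of the import ban
         "Other EACU members": before["Other EACU members"]}  # assumed unchanged

change = (after_redistribution(after) / after_redistribution(before) - 1) * 100
print(f"Change in Belarus' after-redistribution revenue: {change:.1f}%")
# With these hypothetical baselines the pooled revenue shrinks by roughly 44%,
# so Belarus loses after redistribution despite collecting more at its own border.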

In addition to the decline in its tariff revenue, the estimated model predicts that the real wage in Russia decreased by 0.39%, and Russian welfare by 0.49%.

The import ban also affected the European countries that used to export to Russia. The model predicts that the welfare of Latvia declined by 0.38% and that of Lithuania by 0.27%. A substantial portion of this decline can be explained by a deterioration in these countries’ terms of trade: the Russian import ban lowered prices in the countries that exported meat and dairy products to Russia, which reduced their export proceeds, and lower export proceeds buy fewer imports, implying a decrease in welfare.

In spite of the increased tariffs between Russia and Ukraine, the model predicts an increase in Ukrainian welfare by 0.23% following the formation of the EU-Ukraine Deep and Comprehensive Free Trade Area (DCFTA). An increase in real wages by 0.34%, associated with a redirection of Ukrainian exports from Russia towards the EU, is the main factor behind this welfare gain. The predicted increase in real wages in Ukraine has not materialized so far, presumably because of the ongoing military conflict and because it takes time to redirect trade flows in response to tariff changes.

Bearing in mind that the analysis only covers the meat and dairy sectors, Belarus could have increased its welfare during the trade wars by shifting from EACU status back to CIS FTA status with tariffs set at before-EACU levels. In this case, Belarus would not have needed to share its tariff revenue with other countries, and its tariff revenue would have increased by 47.93% instead of the predicted decline of 44.86%. Similarly, welfare during the trade wars would have increased by 0.05%, instead of the predicted decline of 0.03%. Another advantage of moving to CIS FTA status during trade wars is that the real wage could have increased by 0.04% instead of the 0.003% under continued EACU status. Belarus could have benefitted further from CIS FTA status by choosing optimal tariffs. This study suggests that the optimal tariffs of Belarus under CIS FTA status with trade wars are higher than the optimal tariffs without trade wars: higher tariffs are the optimal response to the increased trade flows through Belarus resulting from the trade wars.

Conclusion

Although it is optimal for Belarus to move to CIS FTA status during trade wars, it is optimal to move back to EACU status once the trade wars are over. Such a policy should therefore be adopted with caution, since a shift back to EACU status will likely not be possible. A transition to the CIS FTA should be adopted only if the trade wars are expected to continue for a long period of time, or if the other members of the EACU are expected to deviate often from the common tariffs. At the same time, asking for an increase in its share of the total tariff revenue of the EACU is a feasible strategy for Belarus to follow.

The effect of Belarus’ transition from EACU status to CIS FTA status during trade wars was evaluated here using only the two sectors affected by the counter-sanctions. To evaluate the full welfare effect of such a transition, its effect on the other sectors of the Belarusian economy should also be estimated, which is a question for further research.

Traces of Transition: Unfinished Business 25 Years Down the Road?


This year marks the 25-year anniversary of the breakup of the Soviet Union and the beginning of a transition period, which for some countries remains far from completed. While several Central and Eastern European countries (CEEC) made substantial progress early on and have managed to maintain that momentum until today, the countries in the Commonwealth of Independent States (CIS) remain far from the ideal of a market economy, and also lag behind on most indicators of political, judicial and social progress. This policy brief reports on a discussion on the unfinished business of transition held during a full-day conference at the Stockholm School of Economics on May 27, 2016. The event was organized jointly by the Stockholm Institute of Transition Economics (SITE) and the Swedish Ministry for Foreign Affairs, and was the sixth installment of SITE Development Day – a yearly development policy conference.

A region at a crossroads?

25 years have passed since the countries of the former Soviet Union embarked on a historic transition from communism to market economy and democracy. While all transition countries went through a turbulent initial period of high inflation and large output declines, the depth and length of these recessions varied widely across the region and have resulted in income differences that persist today. Initial conditions, external factors and geographic location all help explain these varied outcomes, but the speed and extent to which reforms were implemented early on were also critical. Countries that took on a rapid and bold reform process were rewarded with a faster recovery and income convergence, whereas countries that postponed reforms ended up with a much longer and deeper initial recession and have seen very little income convergence with Western Europe.

The prospect of EU membership is another factor that proved to be a powerful catalyst for reform and for the upgrading of institutional frameworks. The 10 transition countries that joined the EU are today, on average, performing better than the non-EU transition countries on basically every indicator of development, including GDP per capita, life expectancy, political rights and civil liberties. Even where some of the non-EU countries initially had the political will to reform and started off on an ambitious transition path, the momentum was eventually lost. In Russia, the rising oil prices of the 2000s brought enormous government revenues that enabled the country to grow without implementing further market reforms, and political competition has effectively disappeared. Ukraine, on the other hand, has changed government 17 times in the past 25 years, and even if the parliament appears to be functioning, very few of the passed laws and suggested reforms have actually been implemented.

Evidently, economic transition takes time and was harder than many initially expected. In some areas of reform, such as the liberalization of prices, trade and the exchange rate, progress could be achieved relatively fast. In other crucial areas of reform and institution building, however, progress has been slower and more uneven. Private sector development is perhaps the area where the transition countries differ the most. Large-scale privatization remains to be completed in many CIS countries; in Belarus, even small-scale privatization has been slow. For the transition countries that were early with large-scale privatization, the current challenges of private sector development are different: as production moves closer to the world technology frontier, competition intensifies and innovation and human capital development become key to survival. These transformational pressures require strong institutions and a business environment that rewards education and risk taking. It becomes even more important that financial sectors function, that the education system delivers, that property rights are protected, that regulations are predictable and moderate, and that corruption and crime are under control. While the scale of these challenges differs widely across the region, the need for institutional reforms that reduce inefficiencies and increase the returns on private investment and savings is shared by many.

To increase economic growth and converge towards Western Europe, the key challenges are to increase both productivity and factor inputs into production. This involves raising the employment rate, achieving higher labor productivity, and increasing the capital stock per capita. The region’s changing demography, driven by lower fertility rates and rebounding life expectancy, will add to already high pressures on pension systems, healthcare spending and social assistance. Moreover, the capital stock per capita in a typical transition country is only about a third of that in Western Europe, with particularly wide gaps in infrastructure investment.

Unlocking human potential: gender in the region

Regardless of how well a country does on average, it also matters how these achievements are distributed among the population. A relatively underexplored aspect of transition is the extent to which it has affected men and women differently. Given the socialist system’s provision of universal access to education and healthcare, and its strong emphasis on labor market participation for both women and men, these countries ranked fairly well in gender inequality indices at the start of transition compared to countries at similar levels of GDP outside the region. Nonetheless, these societies were and have remained predominantly patriarchal. Over the last 25 years, most of these countries have seen only a small reduction in the gender wage gap, and some even an increase. Several countries have seen increased gender segregation in the labor market, and have implemented “protective” laws that are in reality discriminatory, as they for example prohibit women from working in certain occupations or indirectly lock mothers out of the labor market.

Furthermore, many of the obstacles experienced by small and medium-sized enterprises (SMEs) are more severe for women than for men. Female entrepreneurs in the Eastern Partnership (EaP) countries have less access to external financing, business training, and affordable and qualified business support than their male counterparts. While the free trade agreements (DCFTAs) between the EU and Ukraine, Georgia, and Moldova have the potential to bring long-term benefits especially for women, these will only be realized if the DCFTAs are fully implemented and gender inequalities are addressed at the same time. Women constitute a large share of the employees in the areas most likely to benefit from the DCFTAs, but run the risk of being held back by societal attitudes and gender stereotypes. In order to better evaluate and study how these issues develop, gender-disaggregated data need to be made available to academics, professionals and the general public.

Conclusion

Looking back 25 years, and given the stakes involved, things could have turned out much worse. Even so, progress in the CIS countries has been uneven and disappointing, and many of them are still struggling with the same challenges they faced in the 1990s: weak institutions, slow productivity growth, corruption and state capture. Meanwhile, the current migration situation in Europe has revealed that even the institutional development towards democracy, free press and judicial independence in several of the CEEC countries cannot be taken for granted. The transition process is thus far from complete, and the lessons from the economics-of-transition literature remain highly relevant.

Participants at the conference

  • Irina Alkhovka, Gender Perspectives.
  • Bas Bakker, IMF.
  • Torbjörn Becker, SITE.
  • Erik Berglöf, Institute of Global Affairs, LSE.
  • Kateryna Bornukova, Belarusian Research and Outreach Center.
  • Anne Boschini, Stockholm University.
  • Irina Denisova, New Economic School.
  • Stefan Gullgren, Ministry for Foreign Affairs.
  • Elsa Håstad, Sida.
  • Eric Livny, International School of Economics.
  • Michal Myck, Centre for Economic Analysis.
  • Tymofiy Mylovanov, Kyiv School of Economics.
  • Olena Nizalova, University of Kent.
  • Heinz Sjögren, Swedish Chamber of Commerce for Russia and CIS.
  • Andrea Spear, Independent consultant.
  • Oscar Stenström, Ministry for Foreign Affairs.
  • Natalya Volchkova, Centre for Economic and Financial Research.

 

Culture and Interstate Dispute


The debate on the impact of culture on the conduct of international affairs, in particular on conflict proneness, continues. Yet the question of whether markers of identity influence conflicts between states remains disputed, and the empirical evidence on Huntington’s clash-of-civilizations thesis is ambiguous. This policy brief summarizes a recent study in which we employ an array of measures of cultural distance between states, including time-varying and continuous variables, and run a battery of alternative empirical models. Regardless of how we operationalize cultural distance and of the empirical specification, our models consistently show that conflict is more likely between culturally distant countries.

In his controversial “The Clash of Civilizations” thesis, Samuel Huntington argues that cultural identity is to become the principal focus of individual allegiance and could ultimately lead to an increasing number of clashes between states, regardless of political incentives and constraints. In the post-Cold War world in particular, Huntington (1993) argues that the main source of conflict will not be ideological, political or economic differences but rather cultural. In other words, fundamental differences between the largest blocks of cultural groups – the so-called “civilizations” – will increase the likelihood of conflict along the cultural fault lines separating these groups.

According to Huntington (1996, p.41), a civilization is “the highest cultural grouping of people and the broadest level of cultural identity people have.” Huntington argues that the world could be divided into discrete macro-cultural areas: the Western, Latin American, Confucian (Sinic), Islamic, Slavic-Orthodox, Hindu, Japanese, Buddhist, and a “possible African” civilization. As the list makes clear, the central defining characteristic of a civilization is religion, and in fact, conflicts between civilizations are mostly between peoples of different religions, while language is a secondary distinguishing factor (Huntington, 1996).

This brief summarizes the findings of our paper (Bove and Gokmen, forthcoming), which offers an empirical analysis of the relationship between identity and interstate disputes by including measures of cultural distance in the benchmark empirical models of the likelihood of militarized interstate disputes. Moving beyond simple indicators of common religion or similar language, we find that conflict is more likely between culturally distant countries. For example, the average marginal effect of the international language barrier on the probability of conflict, relative to the average probability of conflict, is around 65%. Overall, we find that the average marginal impact of cultural distance on the likelihood of conflict, relative to the average probability of conflict, is in the range of 10% to 129%.

Measuring cultural distance

To capture cross-cultural variation between states, we employ five different indexes of linguistic and cultural distance. First, to capture the linguistic distance between two countries, we use the language barrier index (Lohmann, 2011). It ranges between 0 and 1, where 0 means no language barrier, i.e. the two languages are essentially identical, and 1 means that the two languages have no features in common (e.g., Tonga-Bangladesh). Since more than one language is spoken in some countries, we employ two alternative indexes: the basic language barrier, which uses the main official languages, and the international language barrier, which uses the most widely spoken world languages.

Second, we adopt Kogut and Singh’s (1988) standardized measure of cultural differences, as well as an improved version provided by Kandogan (2012). Although the degree of cultural differences is notably difficult to conceptualize, Kogut and Singh (1988) offer a simple and standardized measure of cultural distance, which is based on Hofstede’s (1980) dimensions of national culture. In particular, Kogut & Singh (1988) develop a measure of “cultural distance” (CD) as a composite index based on the deviation from each of Hofstede’s (1980) four national culture scales: power distance, uncertainty avoidance, masculinity/femininity, and individualism.
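As a rough illustration of how this composite is commonly computed (a sketch of the standard formula, not code from the underlying study), the index averages, across the four Hofstede dimensions, the squared difference between two countries' scores, each scaled by that dimension's cross-country variance. The scores and variances below are placeholders rather than actual Hofstede data.

import numpy as np

def kogut_singh(scores_a, scores_b, variances):
    """Kogut-Singh composite: mean over dimensions of squared score differences,
    each scaled by the cross-country variance of that dimension."""
    scores_a, scores_b, variances = map(np.asarray, (scores_a, scores_b, variances))
    return float(np.mean((scores_a - scores_b) ** 2 / variances))

# Hypothetical scores on power distance, uncertainty avoidance,
# masculinity/femininity and individualism, plus hypothetical sample variances.
country_a = [40, 46, 62, 91]
country_b = [93, 95, 36, 39]
dimension_var = [480.0, 560.0, 380.0, 510.0]

print(f"Cultural distance: {kogut_singh(country_a, country_b, dimension_var):.2f}")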

These dimensions of culture are rooted in people’s values, where values are “broad preferences for one state of affairs over others […]; they are opinions on how things are and they also affect our behavior” (Hofstede, 1985). As such, by explicitly taking into account the values held by the majority of the population in each of the surveyed countries, these dimensions can effectively capture differences in countries’ norms, perceptions, and ways to deal with conflicting situations. Higher cultural distance pertains to higher divergence in opinions, norms, or values.

Third, to cross-validate our empirical findings on cultural distance and to take due account of societal dynamics and changes in the composition of societies, we use another popular quantitative measure of cultural distance based on the World Values Surveys (WVS). For the period 1998 to 2006, we use the composite value of two dimensions of values, traditional vs. secular-rational values and survival vs. self-expression values, which account for more than 70% of the cross-cultural variance (Inglehart and Welzel, 2005). The traditional vs. secular-rational dimension captures the difference between societies in which religion is very important and those in which it is not. The second dimension is linked to the transition from industrial to post-industrial societies. Societies near the self-expression pole tend to prioritize wellbeing and quality-of-life issues, such as women’s emancipation and equal status for racial and sexual minorities, over economic and physical security. Broadly speaking, members of societies in which individuals focus more on survival find foreigners and outsiders, ethnic diversity, and cultural change threatening.

Impact of culture on militarized interstate dispute

We estimate the benchmark model of Martin et al. (2008), which uses a large dataset of military conflicts in 1950-2000. We choose this model over other alternatives as it possibly has the most exhaustive list of controls that can potentially affect the probability of militarized interstate disputes (MIDs). We then assess the impact of our cultural distance measures on conflict. All five measures of cultural distance have a positive effect on conflict involvement; in other words, culturally more distant states fight more on average. In column (i) of Table 1, we see that Language Barrier affects conflict positively, although the effect is statistically insignificant. When we instead use International Language Barrier in column (ii), the effect on conflict involvement is positive and significant. This should not come as a surprise, as the part of a country’s culture that is reflected in language should be related to the languages actually spoken rather than the official ones.

To assess the magnitude of the effects, we calculate for each model the standardized marginal effect: the average marginal effect of a cultural distance variable on the probability of conflict relative to the average probability of conflict, which is about 0.0066. This effect is sizeable for International Language Barrier, at around 65%. When we instead use the Cultural Distance (Kogut) measure, the results are qualitatively similar, but the standardized marginal effect is reduced to about 14%. The standardized marginal effect of Cultural Distance (Kandogan) on conflict probability is similar, at 11%. The effect of Cultural Distance (WVS) is also positive and significant; however, its large standardized marginal effect should be interpreted with caution, as the number of countries covered by the WVS is limited due to data availability. Taking all our cultural distance measures together, the evidence suggests that cultural distance increases the likelihood of interstate militarized conflict.
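To make the standardization explicit, the minimal sketch below divides an average marginal effect by the average conflict probability; the marginal-effect value is an illustrative number chosen so that the ratio matches the roughly 65% reported for International Language Barrier, not an estimate taken from the paper.

# Standardized marginal effect = average marginal effect / average probability of conflict.
avg_prob_conflict = 0.0066    # average probability of a militarized dispute (from the text)
avg_marginal_effect = 0.0043  # illustrative AME for International Language Barrier

standardized = avg_marginal_effect / avg_prob_conflict
print(f"Standardized marginal effect: {standardized:.0%}")  # -> 65%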

Table 1. Cultural distance and international conflict

Additionally, in Figure 1, holding all other variables constant, we see a 25% and 19% increase in the odds of conflict for a one-unit increase in the Cultural Distance (Kogut) and Cultural Distance (Kandogan) variables, respectively, while the same increase in Language Barrier raises the odds of conflict by 52%.

Figure 1. Odds ratio of coefficients in Table 1

Note: Cultural distance (WVS) is scaled down by 100 for the sake of readability.

Discussion and conclusion

Samuel Huntington’s thesis on the “Clash of Civilizations” is one of the most fascinating and debated issues in the field of international relations, and has sparked a long-lasting debate about its validity among academics, practitioners and policy-makers. The scholarly literature on international studies has long grappled with how to define, characterize, and test his thesis. Although some of the seminal works provided little support for Huntington’s thesis, later studies seemed to partially confirm it. While most of these studies use Huntington’s classification of civilizations, that classification was tentative, imprecise and difficult to operationalize. Moreover, previous studies rely on a “dichotomization” of what is arguably a continuous concept, and treat civilizational identity as immutable even though it is certainly subject to variation over time.

Political events in recent years, such as the NATO-Russia confrontation over Ukraine, Russia’s attempts to resurrect its cultural and political dominance in the former Soviet sphere, the unprecedented rise of Islamic extremism in the Middle East, the foundation of an organization like ISIS with a declared aim to build a Muslim caliphate and wage war on Western civilization, or the rise of independence and anti-EU movements in Europe, have been attributed by many political observers to cultural clashes. We argue that whether and how identity impacts the likelihood of MID hinges crucially on the definition and operationalization of “civilizations” or cultural similarity.

We therefore introduce a number of ad hoc measures of cultural distance into the benchmark empirical models of the likelihood of MIDs. Regardless of how we define cultural distance, the empirical evidence points consistently towards its importance in explaining the odds of interstate conflict. Although the strength of the evidence, and in particular the size of the effect, clearly depends on model specification and data considerations, our results suggest that conflict is more likely between culturally distant countries.

Our study highlights the importance of being aware of the impact of culture in international relations. Culture can be an important determinant of foreign policy, as pronounced differences in the social norms and behaviors of collective groups might create frictions between states and shape the way they interact. Educating people in cross-cultural sensitivity should therefore be a policy priority: knowledge and acceptance of other cultures are important to avoid tensions and potential conflicts.

References

  • Huntington, Samuel P. 1993. “The Clash of Civilizations?” Foreign Affairs, 22–49.
  • Huntington, Samuel P. 1996. “The clash of civilizations and the remaking of world order”. Penguin Books India.
  • Inglehart, Ronald, & Welzel, Christian. 2005. “Modernization, cultural change, and democracy: The human development sequence.” Cambridge University Press.
  • Kandogan, Yener. 2012. “An improvement to Kogut and Singh measure of cultural distance considering the relationship among different dimensions of culture.” Research in International Business and Finance, 26(2), 196–203.
  • Kogut, Bruce, & Singh, Harbir. 1988. “The Effect of National Culture on the Choice of Entry Mode.” Journal of International Business Studies, 19(3), 411–432.
  • Lohmann, Johannes. 2011. “Do language barriers affect trade?” Economics Letters, 110(2), 159–162.
  • Martin, Philippe, Mayer, Thierry, & Thoenig, Mathias. 2008. “Make trade not war?” The Review of Economic Studies, 75(3), 865–900.

Why Do Scientists Move? The Mobility of Scientists


This policy brief provides an overview of new evidence on the determinants of the mobility of scientists – high human capital workers who generate new ideas and expand the frontier of knowledge. New evidence from a large dataset of elite US life scientists shows that professional factors, including individual productivity and the quality of a scientist’s peer environment, matter for mobility. Strikingly, family structure also plays a significant role, with the likelihood of moving decreasing when a scientist’s children are in high school (14-17 years old). This suggests that even “star” scientists take into account more personal, family factors in their mobility decisions, likely because of the costs associated with disrupting their children’s social networks.

Workers often face an important decision during their career: should I move to a new city for a new job? Relocation is a complex decision that can involve numerous professional and personal factors, and some of these factors can constrain moves while others facilitate them. Relocating can, on one hand, mean significant transition costs in terms of uprooting one’s family and navigating a new city and workplace; on the other hand, it can open up new career opportunities and provide environments where one’s skills are put to better use.

Why should we care about whether and why workers move? Economic theory suggests that mobility is one channel through which worker productivity can be increased by improving the employer-employee “match”. Moreover, particularly for highly-skilled workers, the theory suggests that the mobility of workers can impact the productivity of their peers; if the human capital of the mobile worker “spills over” to their peers, then the peers left behind would experience a decline in productivity and those at the new destination would get a boost.

In light of this, understanding the mobility of scientists – high human capital workers who are generating new ideas and expanding the frontier of knowledge – is of particular importance when considering the potential role that mobility can play in increasing productivity and innovation, which are central to models of economic growth.

The Determinants of Mobility

While there is a growing literature trying to document how the mobility of scientists can impact their own productivity and the productivity of their peers (see e.g. Agrawal, McHale, & Oettl, 2014), a significant challenge is finding plausibly exogenous variation in both the timing and location choices of movers. In order to fully understand the impacts of mobility, we first need to know more about the determinants of mobility: why, and when in their careers, scientists move.

Several studies have examined the determinants of the mobility of scientists and inventors, but the literature has been hampered by a lack of data that allow researchers to observe the relevant factors that may matter for mobility. These studies have tended to focus on professional reasons for moves, especially how individual productivity measures, such as a scientist’s number of publications, citations and patents, predict moves. Importantly, this literature has paid less attention to the constraints on mobility, including more personal factors, such as the role of children and family, and the quality of one’s peer environment.

The findings from these studies on the role of individual productivity for mobility are mixed, with some evidence pointing to a positive relationship (Zucker, Darby and Torero, 2002; Lenzi, 2009), and other evidence showing negative (Hoisl, 2007) or no effects (Crespi et al., 2007). One key professional factor that has remained underexplored in these studies is the quality of the peer environment, i.e. how one’s colleagues can influence the decision to move.

Moreover, very few studies have been able to examine non-professional factors such as the role of family and children. There is some evidence on family factors and inventor mobility from Sweden, where detailed data are widely available but within-country mobility is relatively low (Ejermo and Ahlin, 2014). There is also some evidence from the sociology-of-science literature showing that children influence the scientific performance and mobility of scientists. Using data from the U.S. Census, Shauman and Xie (1996) find that children tend to constrain mobility; for women, children negatively impact mobility regardless of their age, while for men, it is older, high-school-age children that tend to constrain mobility. However, because the study uses Census data, it lacks the individual productivity measures needed to compare the effects across similarly accomplished scientists.

Evidence from Elite Life Scientists in the US

In Azoulay, Ganguli and Graff Zivin (2016) we examine the determinants of mobility of elite life scientists in the U.S. and, for the first time, provide evidence on both professional and personal determinants of mobility. We use a unique panel dataset compiled from the career histories of 10,004 elite life scientists to understand why and when scientists decide to move to new locations. We observe the transitions scientists make between institutions, and focus on moves of at least 50 miles (based on the distance between the zip codes of the institutions) to increase the likelihood that a transition leads the scientist to change their place of residence.

The dataset includes individual productivity measures based on publication counts and U.S. National Institutes of Health (NIH) funding data. We also measure the quality of the peer environment at the scientists’ origin and destination locations using the publication and funding counts of their peers. We distinguish between collaborating and non-collaborating peers (those who are close in “idea space”), and between peers who are geographically close (less than 50 miles away) and those who are distant (more than 50 miles away). Finally, to examine the personal factors, we hand-collected information on the children of each scientist in our sample, including each child’s year of birth.
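As an illustration of how such a distance-based classification of transitions can be implemented, the sketch below computes the great-circle distance between two institutions' zip-code centroids and applies the 50-mile threshold. The coordinates, field names and the haversine approach are assumptions for exposition; the paper's actual geocoding procedure may differ.

import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_transition(origin, destination, threshold=50.0):
    """Label a job transition as a distant or local move using the 50-mile cutoff."""
    d = haversine_miles(origin["lat"], origin["lon"], destination["lat"], destination["lon"])
    return "distant move" if d >= threshold else "local move"

# Hypothetical zip-code centroids for an origin and a destination institution.
origin = {"zip": "02115", "lat": 42.34, "lon": -71.10}
destination = {"zip": "10032", "lat": 40.84, "lon": -73.94}
print(classify_transition(origin, destination))  # -> distant move (roughly 180 miles)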

Through regression analysis, we find that individual productivity is a positive predictor of moves, which is consistent with several other studies (Zucker, Darby and Torero, 2002; Coupé, Smeets & Warzynski, 2006; Lenzi, 2009; Ganguli, 2015). We also provide new evidence on additional professional factors that influence the propensity to move. For example, we find that obtaining recent NIH funding deters moves, perhaps as a result of the transaction costs associated with transferring federally funded research between institutions. We also find that a scientist’s peer environment is a significant predictor of mobility: scientists are less likely to move when the quality of the peer environment near their home institution is high, and more likely to move when the quality of the peer environment at distant institutions is high.

Figure 1. Age of Children and Mobility: Age of Oldest Child


Figure 2. Age of Children and Mobility: Age of Youngest Child


Our most striking result is the role that family structure plays for mobility. We find a significant decrease in mobility when scientists’ children are of high school age. The likelihood of moving increases just before their oldest child enters high school, and again when their youngest child is beyond high school age. Figure 1 shows a notable spike in distant moves just before the oldest child in the household enters high school (11-14 years old), while Figure 2 shows a similar spike after the youngest child completes high school (18-20 years old). In both figures, the relationship between age of children and local (less than 50 miles) moves does not show a similar spike. This relationship between mobility and age of children persists in regressions that allow us to control for productivity measures and potential confounders.

Conclusion

This brief has discussed theory and evidence related to scientific mobility. New evidence from a large dataset of elite life scientists shows that while professional factors do matter for mobility, we also find that even “star” scientists take into account more personal, family factors in their mobility decisions, likely due to potential disruptions to the social networks of their children.

Given that there is still little evidence about what drives relocation decisions, further analysis of these issues is important, and our study raises several additional questions for researchers to examine, many of which have important policy implications. For example, what is it about recent NIH grants that deters mobility – the terms of the grant contracts or the costs of moving personnel and equipment? Regarding the family factors, we were unable to look at differences between female and male scientists, but an important question for further research is whether the age of children and other factors affect women’s and men’s relocation decisions differently.

References

Disclaimer: Opinions expressed in policy briefs and other publications are those of the authors; they do not necessarily reflect those of the FREE Network and its research institutes.

Time to Worry about Illiquidity

At a time when central banks have injected unprecedented amounts of money, worrying about illiquidity may appear odd. However, if poorly understood and left unaddressed, illiquidity could lay the foundation of the next financial crisis. Market liquidity is defined as the ease of trading a financial security quickly, efficiently and in reasonable volume without affecting market prices. While researchers have found that market liquidity is positively correlated with central banks’ liquidity injections, this may no longer be the case. The combination of tightly regulated banks, loosely regulated asset managers, and zero (or negative) policy rates could prove toxic.

One recent volatile day on the markets, an investor called her bank asking to convert a reasonably small amount of foreign currency. The salesperson was quick to respond: “I will hang up now and we will pretend this call never happened”. In other words, the bank was not ready to quote her any price. The typical academic measures of market liquidity, such as bid-offer spreads, remained tranquil on Bloomberg screens, because no transactions were taking place.

When the investor was finally forced to exchange, the result was messy: the currency price gapped – fell discontinuously – causing alarm among other market participants and policymakers. All that because of a transaction of roughly $500,000, at an inopportune moment, in one of the world’s top emerging market currencies according to the BIS Triennial Central Bank Survey.
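For reference, the kind of quoted-spread measure mentioned above can be computed as in the short sketch below; the quotes are hypothetical, and the point of the anecdote is precisely that such on-screen measures can look calm when no one is actually willing to trade.

def relative_spread(bid, ask):
    """Quoted bid-offer spread as a fraction of the mid price."""
    mid = (bid + ask) / 2.0
    return (ask - bid) / mid

# Hypothetical on-screen quotes for an emerging market currency pair.
print(f"Quoted relative spread: {relative_spread(bid=69.950, ask=69.970):.4%}")
# A tight quoted spread understates illiquidity if dealers refuse to trade at those prices.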

Markets becoming less liquid

Post-crisis, the G-7 central banks have embarked on unconventional monetary policy measures to boost liquidity and ease monetary policy at the zero lower bound, while tightening bank regulation and supervision. On net, however, the ability to transact in key financial assets in adequate volume without affecting the price has fallen across a range of markets, including the foreign exchange markets, which are traditionally assumed to be the most liquid compared to bonds, other fixed-income instruments and equities.

Financial market participants have reported a worsening of liquidity, particularly during periods of stress. Event studies include the 2013 “taper tantrum” episode, when emerging markets’ financial assets experienced substantial volatility and liquidity gapping that did not appear justified by the Fed’s signal to marginally reduce its degree of monetary policy accommodation, as well as the more recent shocks to the US Treasury market (October 2014) and to Bunds (early 2015).

Banks are retreating

Market-makers (international “sell-side” or investment banks, as in the introductory example), which used to intermediate between buyers and sellers of financial assets, are now increasingly limiting their activities to a few selected liquid assets, priority geographies and clients, thus fragmenting liquidity. Market-makers have also been reducing asset holdings on their balance sheets in a drive to reduce risk-weighted assets, improve capital adequacy and curb proprietary trading. As a result, they are less willing to transact in adequate volumes with clients.

In the past, bank leverage has been associated with a higher provision of market liquidity, and loose regulation and expansionary monetary policy were conducive to higher bank leverage before 2008. It is therefore puzzling that now, at a time of unconventionally large monetary expansions by central banks, sell-side banks are unwilling to provide market liquidity. The answer may lie in tighter bank capital and liquidity regulation, as well as in more stringent definitions of market manipulation. Risk aversion within banks has also hardened: a trader stands to lose her job and gain little on a $2 million swing in her daily profit and loss, whereas in the past a $20 million swing at the same bank would hardly have warranted a telling-off. Banks have become safer, but can the same be said about the financial system?

Asset managers growing in importance

Ultra-accommodative and unconventional monetary policies have compressed interest rates across all maturities. In a world where two-year US Treasuries do not even yield 1%, and Bunds yield negative rates even beyond 5 years, investors searching for yield are turning to longer (and less liquid) maturities and riskier assets. If banks are unable to meet this demand, others will: assets under management (AUM) of non-bank financial institutions, specifically real-money asset managers, have expanded dramatically in recent years. According to IPE research, the total AUM of the top 400 asset managers was EUR 50 trillion in 2015, compared to EUR 35 trillion in 2011, with the largest individual asset manager managing in excess of EUR 4 trillion. A fundamental problem arises when such asset managers are lightly regulated and very often have similar investment strategies and portfolios.

In industry jargon, these asset managers are called long-only or real-money. Why the funny names? Long-only means they cannot short financial assets, as opposed to hedge funds. For every $100 collected from individual investors’ savings via mutual funds, pension and insurance fund contributions, a small share (say 5%) is set aside as a liquidity buffer and the rest is invested in risky assets. Real money refers to the fact that these managers should not be levered; however, that is true only in principle, as leverage is related to volatility.

The performance of real-money asset managers is assessed against benchmark portfolios. For emerging markets, the benchmark would typically be a selection of government bonds chosen according to a range of criteria, including the size of outstanding debt, ease of access for international investors, liquidity, and the standardization of bond contracts. More often than not, investors do not hedge their foreign currency exposure. A benchmark for emerging market sovereigns could, for example, allocate 10% to Brazil, 10% to Malaysia, 10% to Poland and 5% to Russia. India, by contrast, would be excluded, as it does not allow foreign investors easy access to its government bonds.

Benchmarks and illiquidity dull investor acumen

The widespread use of benchmarks among institutional asset managers can steer the whole market into positioning “one-way”, or herding, contributing to illiquidity and moral hazard risks. Benchmarks by construction reward profligate countries with large and high-yielding stocks of government debt.

While each individual portfolio manager may recognize the riskiness of highly indebted sovereigns, benchmarking makes it optimal to hold debt issued by Venezuela, Ukraine or Brazil: each year of missed performance (before a default) carries the risk of being fired, whereas if the whole industry is caught performing poorly, the benchmark is likely to be down by as much.

Furthermore, real-money asset managers have become disproportionately large relative to the capacity of sell-side banks (brokers) to provide trading liquidity. In fact, some positions have de facto become too large to trade. Even a medium-sized asset manager with no more than $200bn under management (industry leaders have $2-4 trillion in AUM) that attempts to reduce its holdings of Ukraine, Venezuela or Brazil at the first signs of trouble is likely to trigger a disproportionate move in the asset price. This further reduces the incentives to diligently assess each individual investment. In such an environment, risk management has become highly complex, stop losses may no longer be as effective, and more stringent cash ratios would put an individual asset manager at a disadvantage relative to others.
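A back-of-envelope sketch illustrates the too-large-to-trade point: given a benchmark weight, the position scales with assets under management, while the pace of an orderly exit is capped by market turnover. All numbers below, including the benchmark weight, daily turnover and participation cap, are illustrative assumptions rather than figures from the article.

aum = 200e9                # $200bn under management (a medium-sized manager, per the text)
benchmark_weight = 0.05    # hypothetical 5% benchmark allocation to one sovereign
daily_volume = 150e6       # hypothetical $150m average daily turnover in that bond market
max_participation = 0.20   # sell no more than 20% of daily volume to limit price impact

position = aum * benchmark_weight
days_to_exit = position / (daily_volume * max_participation)
print(f"Position: ${position / 1e9:.1f}bn; an orderly exit takes ~{days_to_exit:.0f} trading days")
# At roughly 333 trading days, exiting "at the first signs of trouble" is impossible
# without moving the price sharply against the seller.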

Conclusion

Anecdotal and survey-based measures from the market indicate that liquidity is scarcer and less resilient during risk-off episodes. While regulation has made banks stronger, it may have rendered the financial system less stable. Lightly regulated real-money asset managers are increasing their assets under management, are often positioned “one-way”, and are becoming too large to trade.

Nonetheless, the systemic risk stemming from illiquidity in the new market structure remains little researched and poorly understood by policymakers and academics. Most models of the monetary transmission mechanism and of exchange rate management do not incorporate the complexities of market liquidity.

While regulatory changes have been largely driven by policymakers in the developed markets (naturally, since they were at the epicenter of the global financial crisis), it is the emerging markets that, in my view, are most at risk. They tend to have less developed and less liquid domestic financial markets, and to be even more prone to liquidity gaps, with higher risks of negative feedback loops between the financial sector and the real economy.

References

  • Sahay, R. et al. (2014). “Emerging Market Volatility: Lessons from the Taper Tantrum”, IMF Staff Discussion Note SDN/14/09. http://www.imf.org/external/pubs/ft/sdn/2014/sdn1409.pdf
  • Shek, J., Shim, I. and Shin, H. S. (2015). “Investor redemptions and fund manager sales of emerging market bonds: how are they related?”, BIS Working Paper No. 509. http://www.bis.org/publ/work509.pdf
  • Committee on the Global Financial System (2014). “Market-making and proprietary trading: industry trends, drivers and policy implications”, CGFS Papers No. 52, November 2014. www.bis.org/publ/cgfs52.pdf
  • Committee on the Global Financial System (2016). “Fixed income market liquidity”, CGFS Papers No. 55, January 2016. www.bis.org/publ/cgfs55.pdf
  • Shin, H. S. (2016). “Perspectives 2016: Liquidity Policy and Practice” conference, AQR Asset Management Institute, London Business School, 27 April 2016. https://www.bis.org/speeches/sp160506.htm
  • Fender, I. and Lewrick, U. (2015). “Shifting tides – market liquidity and market making in fixed income instruments”, BIS Quarterly Review, March 2015. www.bis.org/publ/qtrpdf/r_qt1503i.htm
  • Adrian, T., Fleming, M. and Schaumburg, E. (2015). “Introduction to a Series on Market Liquidity”, Liberty Street Economics, Federal Reserve Bank of New York, August 2015.