«Citation for published item: Andreou, P. C., Louca, C. and Savva, C.S. Short-horizon event study estimation with a STAR model and real contaminated ...»
has largely overlooked the possibility that there may be significant aberrations between simulated and real events that could induce undue variance in the estimation window and thereby impact the power of the test statistics.6 Therefore, the overall observed superiority of the STAR event study test statistic, especially under extreme market conditions, supports the use of this method in future empirical work that relies on short-horizon event studies.
The paper is organized as follows. Section 2 presents our experimental design, including the data we use, the simulation setup for the event study return-generating process, and the test statistics we investigate. Section 3 discusses the performance results from the simulated and the real stock-returns data. Section 4 concludes the study.
2. Methods and experimental design

Following the seminal work by Brown and Warner (1985), previous researchers generate simulated random shocks in the estimation window in two steps. First, firms are randomly picked from the universe of stocks included in the Center for Research in Security Prices (CRSP) database. Second, an extraordinary event is simulated for each firm and randomly introduced in the real stock price series in the estimation window. Such a contaminated event is necessary to artificially induce variance in the estimation parameters of the market model.
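As a sketch, this two-step simulation can be illustrated with synthetic series standing in for CRSP returns; the firm universe, shock size, and sample size below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_contaminated_event(returns, shock=0.02):
    """Randomly place one artificial shock inside the estimation window.

    `returns` is one firm's estimation-window return series; the 2%
    shock size is purely illustrative.
    """
    day = rng.integers(len(returns))
    contaminated = returns.copy()
    contaminated[day] += shock
    return contaminated, day

# Step 1 (sketch): draw firms at random -- synthetic series stand in for CRSP data.
universe = {f"firm_{i}": rng.normal(0.0, 0.02, size=250) for i in range(100)}
sample = rng.choice(list(universe), size=10, replace=False)

# Step 2: simulate an extraordinary event for each sampled firm.
contaminated_sample = {f: inject_contaminated_event(universe[f]) for f in sample}
```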
Campbell et al. (2001) document a noticeable increase in firm-level volatility relative to market volatility over the period from 1962 to 1997, which is associated with a decline in the explanatory power of the market model (see also empirical evidence in Aktas et al. 2007b, as well as Arora et al. 2009 for emerging markets). Kothari and Warner (2007) note that this is highly relevant to event study inference because it suggests time-variation in the power of test statistics to detect abnormal performance for certain events. Studies that rely on purely simulated variance-induced events may fail to properly capture such stylized (structured) patterns in the return-generating mechanism.
Our empirical analysis utilizes two return-generating processes as sources of data to investigate the specification and power of test statistics. First, we follow the traditional event study approach proposed by Brown and Warner (1980, 1985) to simulate abnormal performance and variance-increase event days. Since then, a number of other studies have adopted this approach (see, for instance, Corrado 1989; Corrado and Zivney 1992; Boehmer et al. 1991; Savickas 2003; Aktas et al. 2009; and Kolari and Pynnönen 2010). The traditional approach relies solely on simulated data to construct the event-induced variance increase in the return-generating process. In addition, we follow the notion postulated by Aktas et al. (2007a) to contaminate the event study estimation window.
Second, we repeat all specification and power tests using a large sample of M&As to investigate the impact of estimation period contamination under the influence of unrelated events and of event-induced increases in return variance that may emerge naturally for firms that engage in this particular corporate activity. It is of vital importance to investigate whether assertions regarding the specification and power of test statistics obtained with simulated contaminated returns also remain unchanged with real returns, in which firm-specific events emerge naturally and reflect news arrival in a purely stochastic manner (i.e., in accordance with the efficient market hypothesis). M&A announcements reflect real-event contamination that naturally induces variance in the estimation window. In this respect, we inherently take into account both the cross-sectional variation relating to the underlying economic effects of a real event and any structure in the heteroskedasticity arising from these events.7 Hence, by investigating the performance of test statistics using a large data set of M&As, we avoid any (unrealistic) assumptions imposed when contamination is done in an artificial manner.
Both of these elements are deemed important since Harrington and Shrider (2007) identify them to be “troubling features” of the statistical tests reported in many prior studies. In addition, the real return-generating process would allow many different sorts of unrelated events to affect the estimation period, revealing which tests are robust when employed with non-simulated data.
2.1. Event study method

2.1.1. Return generating processes

A classical approach for measuring abnormal returns is based on the market model introduced by Sharpe (1963):

R_{j,t} = \alpha_j + \beta_j R_{m,t} + \varepsilon_{j,t} \qquad (1)

where,
– \alpha_j and \beta_j are the coefficient estimates for firm j;
– R_{j,t} and R_{m,t} are the returns for firm j and for a market portfolio proxy m on day t, respectively; and the residuals \varepsilon_{j,t} are assumed to be independent and identically distributed and capture the abnormal behaviour.
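For illustration, the market model parameters and abnormal returns can be estimated by OLS; this is a minimal sketch with synthetic inputs, not the authors' exact implementation:

```python
import numpy as np

def market_model_abnormal_returns(r_j, r_m):
    """Fit R_j,t = alpha_j + beta_j * R_m,t + eps_j,t by OLS over the
    estimation window and return (alpha, beta, residuals); the
    residuals are the abnormal returns."""
    X = np.column_stack([np.ones_like(r_m), r_m])
    (alpha, beta), *_ = np.linalg.lstsq(X, r_j, rcond=None)
    return alpha, beta, r_j - (alpha + beta * r_m)
```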
An alternative way to obtain event-day predictions and standard errors is to employ the market model augmented by a dummy variable that captures the effect of the event (see Karafiath 1988; Salinger 1992):

R_{j,t} = \alpha_j + \beta_j R_{m,t} + \gamma_j D_{j,t} + \varepsilon_{j,t} \qquad (2)

where,
– D_{j,t} is a dummy variable equal to 1 in the event window for firm j, and 0 otherwise.
The coefficient γ captures the abnormal return, which in essence is the forecast error between what one expects to observe under a normal return-generating model and what is actually observed during the event window. As shown by Aktas et al. (2007a), however, this model is problematic: its OLS solution overestimates the standard error of an individual firm's abnormal returns when the true return-generating process has two states (see also the discussion in Salinger 1992). This is mainly attributed to the fact that the variance-covariance matrix is state dependent and no longer homoskedastic. Therefore, they suggest a model that captures a low- and a high-variance regime. Hence, building on Aktas et al. (2007a), we incorporate regime-dependent intercepts and slope coefficients in the mean specification as follows:

R_{j,t} = \alpha_{j,S} + \beta_{j,S} R_{m,t} + \varepsilon_{j,t}, \qquad \varepsilon_{j,t} \sim N(0, \sigma^2_{j,S})
where S is a state variable, with S = 1 for the low-regime state and S = 2 for the high-regime state. More specifically, the parameters \alpha_{j,S} and \beta_{j,S} allow us to explicitly incorporate the presence of contaminated events into the mean specification of the model. Similar intuition applies to the variance, where the high-variance state is greater than the low-variance state (\sigma^2_{j,2} > \sigma^2_{j,1}).8 As for the way the transition between the two regimes is governed, we follow the methodology and notation of Hamilton (1994). More specifically, the transition between the two regimes is governed by a Markov chain of order 1, for which the transition matrix is given by:

P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}

where p_{kl} = P(S_t = k \mid S_{t-1} = l) corresponds to the probability of changing from state l to state k, with the unconditional probability of the regime given by:

\pi_1 = \frac{1 - p_{22}}{2 - p_{11} - p_{22}}, \qquad \pi_2 = 1 - \pi_1
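As a numerical check of the two-state chain, the unconditional (ergodic) probabilities can be recovered from the unit-eigenvalue eigenvector of the transition matrix; the transition probabilities below are illustrative assumptions, not estimated values:

```python
import numpy as np

# Columns of P sum to one under the convention p_kl = P(S_t = k | S_{t-1} = l).
P = np.array([[0.95, 0.10],
              [0.05, 0.90]])

# Unconditional probabilities solve P @ pi = pi with the entries of pi
# summing to 1, i.e. the eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
```

For a two-state chain this reproduces the closed form pi_1 = (1 - p_22) / (2 - p_11 - p_22).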
In addition, we introduce a STAR model specification that has the capacity to capture the state-dependent generating process of stock returns. This model can be thought of as an extension of autoregressive models that allows the parameters to change according to the value of a transition variable. (Salinger (1992) also discusses deficiencies of the traditional approach in estimating the abnormal-return variance when the market model parameters are not stable, which could lead to incorrect inferences about the detection of abnormal returns.) More specifically, the market STAR model specification can be presented as follows:

R_{j,t} = (\alpha_{j,1} + \beta_{j,1} R_{m,t}) [1 - G(z_t, \zeta, c)] + (\alpha_{j,2} + \beta_{j,2} R_{m,t}) G(z_t, \zeta, c) + \varepsilon_{j,t}
where i = 1 for the low state of the market and i = 2 for the high state of the market. To assess whether the effects on the returns vary with the state of the market, we employ a continuous transition function G(z_t, \zeta, c), which changes smoothly from 0 to 1 as the transition variable z_t increases. A popular choice is the logistic function:9

G(z_t, \zeta, c) = \frac{1}{1 + \exp\{-\zeta (z_t - c)\}}, \qquad \zeta > 0
In practice the appropriate transition variable z_t is unknown; a good choice, however, is to use lagged endogenous variables (in our case, R_{j,t-1}). This choice is also supported by an LM-type statistic (see Terasvirta 1994 for further details) computed during the analysis, which indicates that the first lag of the dependent variable is the best choice. Therefore, at any given point in time the evolution of R_{j,t} is determined by a weighted average of two different regression models. The weights assigned to the two models depend on the value taken by the transition variable z_t. For small (large) values of z_t, G(z_t, \zeta, c) is approximately equal to zero (one) and, hence, almost all weight is put on the first (second) part of the model.
The parameter c denotes the threshold between the two regimes corresponding to G(z_t, \zeta, c) = 0 and G(z_t, \zeta, c) = 1, in the sense that the logistic function changes monotonically from 0 to 1 as z_t increases.10 This logistic form has been widely used for smooth transition models. For further details we refer to Terasvirta and Anderson (1992), Terasvirta (1994), and van Dijk and Franses (1999).
The starting values of \zeta_i and c_i (with \zeta_i > 0) are determined by a grid search, and the parameters are estimated in one step by maximizing the likelihood function, while the threshold point between the states is estimated by the model.
The parameter ζ determines the speed at which the weights between the two parts of the specification change as z_t increases: the higher ζ, the faster this change. If ζ → 0, the weights become constant (equal to 0.5) and the model becomes linear, whereas if ζ → ∞, the logistic function approaches a Heaviside function, taking the value of 0 for z_t ≤ c and 1 for z_t > c.
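The behaviour of the logistic weight can be sketched directly; the parameter values used here are arbitrary illustrations:

```python
import numpy as np

def G(z, zeta, c):
    """Logistic transition function G(z_t, zeta, c), bounded in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-zeta * (z - c)))

# At the threshold z = c the weight is exactly 0.5; as zeta grows the
# transition sharpens toward a step (Heaviside) function at c, and as
# zeta -> 0 the weight flattens toward the constant 0.5.
```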
Although the STAR and Markov specifications belong to the family of switching models, there is an important conceptual difference between them. As noted by Deschamps (2008), the STAR model incorporates strong prior knowledge on the factors determining the onset of transitions between regimes (through the transition variable), while in the Markov switching model, such prior knowledge only consists in a flexible evolution equation.
Therefore, the choice of an appropriate transition variable allows the STAR model to exploit the available information more effectively and deliver better results.
2.1.2. Statistical tests of significance

Over the past decades, many studies have contributed test statistics to the event study methodology. The tests we consider are the BMP (Boehmer et al. 1991), BETA-1, RANK (Corrado 1989), GARCH (Savickas 2003), and TSMM (Aktas et al. 2007a) tests, together with the mean-return regime-dependent extensions of the latter.
To introduce the different tests, we follow the notation used by Boehmer et al. (1991) and Aktas et al. (2007a): For each test, we consider the null hypothesis of no cross-sectional average (cumulative) abnormal returns around the event date.11
– AR_{j,t}: abnormal return of firm j on date t;
– AR_{jE}: abnormal return of firm j on the event date (following Eq. (2));
– T: number of days within the estimation period;
– T_E: number of days within the event period;
– \bar{R}_m: average return on the market portfolio during the estimation period;
– R_{m,E}: market return on the event date;
– \hat{S}_j: standard deviation of firm j's AR during the estimation period;
– SR_{jE}: standardized AR of firm j on the event date, calculated as:

SR_{jE} = \frac{AR_{jE}}{\hat{S}_j \sqrt{1 + \frac{1}{T} + \frac{(R_{m,E} - \bar{R}_m)^2}{\sum_{t=1}^{T} (R_{m,t} - \bar{R}_m)^2}}}

These tests are described only briefly; for further information we refer the reader to the original contributions made by the authors of each test.
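A minimal sketch of the standardized abnormal return with the usual forecast-error correction over the estimation window; the degrees-of-freedom adjustment below is our assumption, not necessarily the authors' exact choice:

```python
import numpy as np

def standardized_event_ar(ar_event, ar_est, r_m_est, r_m_event):
    """SR_jE: event-day AR scaled by the forecast-error-corrected
    standard deviation from the estimation window (BMP-style)."""
    T = len(ar_est)
    s_j = ar_est.std(ddof=2)  # ddof=2: two market-model parameters estimated (assumption)
    rm_bar = r_m_est.mean()
    correction = 1.0 + 1.0 / T + (r_m_event - rm_bar) ** 2 / ((r_m_est - rm_bar) ** 2).sum()
    return ar_event / (s_j * np.sqrt(correction))
```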
BMP test

The BMP test employs a cross-sectional approach that relies on standardized abnormal returns; it was introduced to deal with the event-induced increase in return variance.
BETA-1 test

The BETA-1 test is a simplification of the BMP test (the restrictions β = 1 and α = 0 transform Eq. (1) into the market-adjusted model). The test is based on cross-sectional estimates of the standard deviation of the event-day abnormal returns, AR_E. The BETA-1 test does not rely on estimating the unconditional expected return with stock-return data prior to the event window. Due to this feature, it has been employed by several empirical studies investigating the wealth effects of M&As (e.g., Fuller et al. 2002; Moeller et al. 2004) in an effort to isolate the event-window abnormal returns from any unrelated events that could have been observed in the estimation window prior to the announcement.
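A sketch of the BETA-1 idea as a cross-sectional t-statistic on event-day market-adjusted abnormal returns (AR_jE = R_jE − R_mE under β = 1, α = 0); the exact scaling used by the authors may differ:

```python
import numpy as np

def beta1_test(ar_event):
    """Cross-sectional t-statistic on event-day market-adjusted ARs;
    no estimation window is needed, since the expected return is the
    market return itself."""
    ar_event = np.asarray(ar_event, dtype=float)
    n = ar_event.size
    return ar_event.mean() / (ar_event.std(ddof=1) / np.sqrt(n))
```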
RANK test

This is a non-parametric test based on the ranks of abnormal returns proposed by Corrado (1989) (see also Corrado and Zivney 1992). The RANK test merges the estimation and event windows into a single time series of L = T + T_E days. Abnormal returns are sorted and a rank is assigned to each day. If K_{j,t} is the rank assigned to firm j's abnormal return on day t, then the RANK test is

t_{RANK} = \frac{\frac{1}{N}\sum_{j=1}^{N}\left(K_{j,E} - \frac{L+1}{2}\right)}{S(K)}, \qquad S(K) = \sqrt{\frac{1}{L}\sum_{t=1}^{L}\left(\frac{1}{N}\sum_{j=1}^{N}\left(K_{j,t} - \frac{L+1}{2}\right)\right)^2}

where N is the number of firms in the sample.
The use of ranks neutralizes the impact of the shape of the abnormal returns distribution (e.g., its skewness and kurtosis and the presence of outliers). It should therefore represent an attractive alternative way of neutralizing contaminating events within the estimation window that may cause an event-induced increase in variance and cross-correlation.
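A sketch of a Corrado-style rank statistic over the merged window; ranks are computed per firm, ties are assumed away for continuous returns, and this is not necessarily the authors' exact implementation:

```python
import numpy as np

def rank_test(ar_matrix, event_idx):
    """Rank statistic on an (N firms x L days) matrix of abnormal
    returns over the merged estimation + event window: the average
    event-day rank deviation, scaled by the time-series standard
    deviation of the daily average rank deviations."""
    N, L = ar_matrix.shape
    ranks = ar_matrix.argsort(axis=1).argsort(axis=1) + 1.0  # ranks 1..L per firm
    k_bar = (L + 1) / 2.0                                    # expected rank under the null
    mean_dev = (ranks - k_bar).mean(axis=0)                  # average rank deviation per day
    s_k = np.sqrt((mean_dev ** 2).mean())
    return mean_dev[event_idx] / s_k
```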
GARCH test

This test assumes that the variance of the error term of Eq. (2) is a time-varying process. Adopting a GARCH approach, Savickas (2003) suggested the use of the following return-generating process:

R_{j,t} = \alpha_j + \beta_j R_{m,t} + \gamma_j D_{j,t} + \varepsilon_{j,t}, \qquad \varepsilon_{j,t} \sim N(0, h_{j,t})

h_{j,t} = \omega_j + \varphi_j \varepsilon^2_{j,t-1} + \theta_j h_{j,t-1} + d_j D_{j,t}

where h_{j,t} is the conditional time-varying variance and \omega_j, \varphi_j, \theta_j and d_j are the coefficients of a GARCH(1,1) specification. Due to its time-varying nature, the GARCH model has the ability to control for the time-varying variance of AR and the event-induced increase in return variance.
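The conditional-variance recursion of a GARCH(1,1) with an event dummy can be sketched as follows; the initialization choice and any parameter values used with it are illustrative assumptions, not estimates:

```python
import numpy as np

def garch_variance(resid, dummy, omega, phi, theta, d):
    """Recursion h_t = omega + phi * eps_{t-1}^2 + theta * h_{t-1} + d * D_t
    for a GARCH(1,1) specification augmented by an event dummy
    (a sketch of the Savickas-style setup, not fitted parameters)."""
    h = np.empty_like(resid, dtype=float)
    h[0] = resid.var()  # common initialization choice (assumption)
    for t in range(1, len(resid)):
        h[t] = omega + phi * resid[t - 1] ** 2 + theta * h[t - 1] + d * dummy[t]
    return h
```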
The conditional variance hj,t provides a natural estimator of the AR variance. Savickas (2003) used it to standardize the AR before proceeding with the BMP test. In this setting, AR