«Citation for published item: Andreou, P. C., Louca, C. and Savva, C.S. Short-horizon event study estimation with a STAR model and real contaminated ...»
is captured in the γ estimate of Eq. (3) and Eq. (8) is replaced by:
Mean-Variance two-state market model test (MV-TSMM)
The test proposed by Aktas et al. (2007a) utilizes the restricted version of Eq. (3), where only the variance is assumed to be state dependent (hereafter V-TSMM). The idea behind this test is that the presence of contaminated events within the estimation window inflates the (ex-post) estimate of the abnormal return variance, forcing traditional tests to overestimate the residual variance over the estimation period. To deal with this bias, the V-TSMM test relies on the Markov switching regression framework developed by Hamilton (1989, 1994) and, as in the GARCH test above, the standard error of the γ estimate is used to standardize the AR as in Eq. (14).
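Since the V-TSMM builds on Hamilton's switching framework, a minimal sketch of the filtering step may clarify how a high-variance regime absorbs contaminated observations. This is our own illustration with fixed (rather than estimated) parameters and simulated residuals; the function name, transition matrix, and variances are assumptions, not the authors' estimation code.

```python
import numpy as np

def hamilton_filter(y, mu, sigmas, P):
    """Filtered regime probabilities for a two-state variance-switching model
    (in the spirit of Hamilton 1989). y: residuals; mu: common mean;
    sigmas: (sigma_1, sigma_2); P[i, j] = Pr(S_t = j | S_{t-1} = i)."""
    sig = np.asarray(sigmas, dtype=float)
    xi = np.full(2, 0.5)                       # initial state probabilities
    filtered = np.zeros((len(y), 2))
    for t, yt in enumerate(y):
        pred = xi @ P                          # one-step-ahead probabilities
        dens = np.exp(-0.5 * ((yt - mu) / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)
        joint = pred * dens
        xi = joint / joint.sum()               # Bayes update given y_t
        filtered[t] = xi
    return filtered

rng = np.random.default_rng(0)
resid = rng.normal(0.0, 1.0, 200)              # calm residuals (sigma = 1)
resid[100:110] *= 3.0                          # a contaminated high-variance burst
P = np.array([[0.95, 0.05], [0.20, 0.80]])     # persistent calm state
probs = hamilton_filter(resid, 0.0, (1.0, 3.0), P)
print(probs[100:110, 1].mean(), probs[:100, 1].mean())
```

The burst days receive a visibly higher filtered probability of the high-variance state, which is precisely the mechanism the V-TSMM exploits to keep contaminated days from inflating the estimation-window variance.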
Prior empirical research has revealed significant time-variation in the slope parameter depending on market conditions (Hays and Upton 1986; Klein and Rosenfeld 1987; Chang and Weiss 1991; Chiang et al. 2013). Therefore, neglecting state effects in the mean equation may lead to misleading inferences. To avoid this problem, we suggest using the unrestricted version of Eq. (3) and proceed using the same steps as before.12 We name this test MV-TSMM.
STAR test
Finally, the STAR test statistic utilizes Eq. (6), where both the mean and variance are assumed to be state dependent.13 These states change according to the behaviour of the transition variable (in our case, the one-period lagged return), filtering out firm-specific contaminating events that could otherwise influence the mean and variance estimates.14 This model shares the benefits of the other regime specifications and has the additional advantage that it endogenously determines (and quantifies) the degree of change from one regime to the other.
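The smooth transition between regimes is typically governed by a logistic function of the transition variable. The sketch below is a generic illustration of that mechanism, not the paper's Eq. (6): the parameterization (two regime-specific market-model means and variances, weighted by the logistic transition on the lagged return) is our own assumption.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition function G(s; gamma, c) in (0, 1).
    gamma controls the smoothness of the regime change, c its location;
    a large gamma approaches an abrupt two-state (threshold) switch."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def star_fitted(r_m, r_lag, theta):
    """Regime-weighted fitted mean and variance of an illustrative two-regime
    STAR-type market model. theta = (a1, b1, a2, b2, s1, s2, gamma, c)."""
    a1, b1, a2, b2, s1, s2, gamma, c = theta
    G = logistic_transition(r_lag, gamma, c)   # transition on the lagged return
    mean = (1 - G) * (a1 + b1 * r_m) + G * (a2 + b2 * r_m)
    var = (1 - G) * s1 ** 2 + G * s2 ** 2
    return mean, var

# At the threshold (s = c) the model sits exactly halfway between regimes:
print(logistic_transition(0.0, 5.0, 0.0))      # 0.5
```

Because gamma is estimated, the degree of smoothness of the regime change is determined endogenously, which is the property the text highlights.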
Estimation is based on the maximum likelihood method using the MSVAR library in the GAUSS software.
Programming code for the STAR event study model is freely available from the authors’ websites.
The one-period lagged return proved to be the most appropriate transition variable for more than 90% of the firms in our sample (based on the LM-type test).
As before, the standard error of the γ estimate is used to standardize the AR as in Eq. (14).
2.2. Data and sample construction
2.2.1. Simulated return generating processes
To generate the theoretical event-free return-generating process, we consider all stocks reported in the CRSP daily returns file from January 1980 to December 2010.
Contaminating events in each stock time-series are generated by random sampling from a uniform distribution, whereas for each stock we ensure that the resulting return time-series (estimation period and event window) does not belong to the M&A data set used in this study. The estimation window runs from -255 to -30 days relative to the event date for the traditional tests, estimated using the market model of Eq. (1). For the set of tests employing the dummy-based market model of Eq. (2), estimation runs from -255 to +5 days relative to the event date. Following Fuller et al. (2002), in all tests we measure cumulative abnormal returns (CARs) over the 11-day window [-5, +5] around the event announcement date. All stocks and event dates were randomly chosen with replacement such that each stock/date combination had an equal chance of being chosen at each selection. In the spirit of previous studies (e.g., Savickas 2003; Aktas et al. 2007a; Harrington and Shrider 2007), we exclude securities with missing information on the event day, securities with fewer than 100 nonzero returns over the estimation window, and securities with missing prices in the 11-day window surrounding the event announcement. This treatment avoids observations where the security had recently been added to CRSP and limits stocks that are not actively traded in the market. Following the norm in similar studies, for each replication we construct 1000 samples of 50 stocks.15 We investigate the specification and power of the test statistics by deliberately contaminating the data in the estimation window. We simulate significant events by introducing abnormal returns (of random sign, with magnitude twice the standard deviation of the actual stock) at randomly selected points in time within the estimation window.
To generate stochastic shocks, we follow the method proposed by Brown and Warner (1985), adding two demeaned returns randomly drawn from the estimation window. The sign of the simulated abnormal return is determined by random sampling from a Bernoulli distribution.
Allowing either a positive or a negative sign for the abnormal return merely reflects the unknown type of the events that may emerge in the estimation window. In statistical terms,

R'_{j,t} = R_{j,t} ± 2σ_j + (R_{j,X} − R̄_j) + (R_{j,Y} − R̄_j),   (17)

where R_{j,t} is the actual return, R_{j,X} and R_{j,Y} are returns randomly selected from the estimation period, and R̄_j and σ_j are the mean return and standard deviation in the estimation window.
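The shock construction can be sketched numerically. The estimation-window length, return parameters, and seed below are illustrative assumptions; the sketch also checks the variance property that motivates the design, namely that adding two independent demeaned draws roughly triples the return variance.

```python
import numpy as np

rng = np.random.default_rng(42)
est = rng.normal(0.0005, 0.02, 226)            # hypothetical estimation-window returns
r_bar, sigma = est.mean(), est.std()

def shocked_return(r_t, sign):
    """Transformed return in the spirit of Eq. (17): a +/- 2-sigma abnormal
    return plus two demeaned returns drawn from the estimation window."""
    r_x, r_y = rng.choice(est, size=2, replace=False)
    return r_t + sign * 2 * sigma + (r_x - r_bar) + (r_y - r_bar)

sign = 1 if rng.random() < 0.5 else -1         # Bernoulli(0.5) random sign
r_new = shocked_return(est[0], sign)

# Monte Carlo check: the two-draw shock alone contributes about 2*sigma^2,
# so var(R') is roughly 3*var(R) when the draws are independent of R.
shocks = rng.choice(est, (20000, 2)).sum(axis=1) - 2 * r_bar
print(shocks.var() / sigma ** 2)               # close to 2
```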
The number and nature of the events during the estimation window is determined in two steps. First, a random sample is drawn from a Poisson distribution with a mean of 2, which captures the number of events during the estimation window. Events were then randomly assigned to specific days in the estimation window by random sampling from a uniform distribution. Second, the length (in number of days) of each event was again randomly sampled from a Poisson distribution, this time with a mean of 4.

15. We choose to report results using a portfolio size of 50 stocks to maintain conformity with notable previous studies (e.g., Brown and Warner 1980; Savickas 2003; Aktas et al. 2007a; Harrington and Shrider 2007; Kolari and Pynnönen 2010). Nevertheless, some recent studies such as Ahern (2009) and Campbell et al. (2010) simulate larger stock portfolio sizes. We have repeated the whole analysis with 1000 samples of either 100 or 250 stocks each and find that our results/inferences remain unchanged.
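The two-step event-generation procedure can be sketched as follows. The window length is an assumption (226 trading days, consistent with a [-255, -30] window), and flooring the Poisson(4) length at one day is our own choice, since a Poisson draw can be zero.

```python
import numpy as np

rng = np.random.default_rng(7)
EST_LEN = 226                                  # assumed estimation-window length

def draw_contaminating_events():
    """Two-step event generation: a Poisson(2) number of events, each placed
    uniformly in the window, with a Poisson(4) length in days (floored at 1)."""
    events = []
    for _ in range(rng.poisson(2)):            # step 1: number of events
        start = int(rng.integers(0, EST_LEN))  # uniform placement in the window
        length = max(1, int(rng.poisson(4)))   # step 2: event length in days
        events.append((start, min(start + length, EST_LEN)))
    return events

print(draw_contaminating_events())             # list of (start, end) day ranges
```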
On the simulated event day, the abnormal performance is 0% for the specification analysis and +1% for the power analysis.16 To capture the event-induced increase in return variance, similarly to Aktas et al. (2007a), each stock's day-0 return, R_{j,0}, was transformed to triple its variance by adding two demeaned returns randomly drawn from the estimation window. The event-day transformed return again followed Eq. (17). In all cases, our market portfolio is the CRSP value-weighted index.17
2.2.2. Real data return generating processes
This paper also strives to empirically validate the robustness of each event study approach to the return-generating mechanism by detecting abnormal performance in real data.
It is intriguing from a practical perspective to investigate whether the results from the simulations also obtain when dealing with a real sample of corporate event announcements such as M&As. Therefore, we depart from the previous literature: instead of simulating abnormal returns to deliberately contaminate the estimation window with significant events, we randomly choose stock return time-series from a sample of M&A deals.18 In this manner, contaminating (unrelated) events due to other corporate actions that precede the acquisition announcement emerge naturally in the estimation window. This treatment allows us to be more realistic with respect to the characteristics of the contaminating events, which, instead of being artificially generated using pre-determined nuisance distributional parameters, are taken from the stock returns of firms that undergo a real corporate action.

16. We reach qualitatively similar results for any abnormal performance above 1%.

17. All results are robust when we instead use the CRSP equally weighted index.

18. The samples for the tests are completed as follows. For the 10% rejection rates: to construct the 1000 portfolio samples, each 50-firm sample is formed by randomly picking 45 firms from the universe of CRSP stocks (event-free sample) and 5 firms from the M&A data set (contaminated sample). Likewise, for the 5% rejection rates, in the first (second) 500 samples we randomly pick 3 (2) firms from the M&A data set and 47 (48) firms from the universe of CRSP stocks. For the 1% rejection rates, in the first (second) 500 samples we randomly pick 1 (0) firms from the M&A data set and 49 (50) firms from the universe of CRSP stocks.
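The sample-mixing scheme described above reduces to a simple two-pool draw. The firm identifiers below are placeholders, not the paper's data; only the 45/5 split for the 10% design comes from the text.

```python
import random

rng = random.Random(0)                         # fixed seed for reproducibility

def build_sample(event_free_ids, contaminated_ids, n_contaminated, size=50):
    """One portfolio sample: n_contaminated firms drawn (with replacement)
    from the M&A pool and the remainder from the event-free CRSP pool."""
    picks = [rng.choice(contaminated_ids) for _ in range(n_contaminated)]
    picks += [rng.choice(event_free_ids) for _ in range(size - n_contaminated)]
    return picks

event_free = list(range(10000))                # placeholder event-free firm ids
contaminated = list(range(10000, 10500))       # placeholder M&A firm ids
# The 10% design: 5 contaminated + 45 event-free firms per 50-firm sample
sample = build_sample(event_free, contaminated, 5)
print(len(sample), sum(f >= 10000 for f in sample))   # 50 5
```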
The analysis includes all merger and acquisition announcements involving U.S. targets taking place over the period 1980-2010, extracted from the Securities Data Corporation (SDC) database. We require that firms are listed on NYSE, AMEX or NASDAQ with available CRSP data, that the outcome of the deal is known (either completed or withdrawn), and that the deal value is over 1 million USD. We exclude deals whose value represents less than 1% of the bidder's market capitalization. The final sample includes 4421 bidders and 5928 targets, which constitute the universe of our real stock returns data.
Our research design with the real data closely follows the one we implement with the simulated stock returns. The aim of this analysis, however, is to validate the power of the event study tests when the estimation window is naturally contaminated by disruptions in the normal return-generating process of the size and amplitude we might expect from real-world corporate actions; contaminating events are assumed to inherently exist in the estimation window. Estimation and event window lengths are the same as in the simulated paradigm. To study the rejection rates of the test statistics under the no event-induced increase in variance case, we enforce a neutral abnormal performance of 0% on the real event date by demeaning the real stock returns in the 11-day period [-5, +5] around the event date.19 In this way the actual market reaction to the event is neutralized and the event window with actual data behaves similarly to the simulated data. Thereafter, to capture the event-induced increase in variance, each security's day-0 return, R_{j,0}, was transformed to triple its variance by adding two demeaned returns randomly drawn from the estimation window; the event-day transformed return again followed Eq. (17). Finally, to study the power of each method, we introduce a 1% abnormal return on the real event date.

19. Unreported findings suggest that our results are robust to longer or shorter event windows (e.g., [-20, +20] and [-1, +1]).
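The neutralization step is a simple demeaning of the event window. The return series below is a hypothetical stand-in for a real stock, with the event day placed so that the [-5, +5] window sits at the end of the series.

```python
import numpy as np

def neutralize_event_window(returns, event_idx, half_width=5):
    """Enforce 0% abnormal performance by demeaning the [-5, +5] window of
    returns around the event day, leaving all other days untouched."""
    r = returns.copy()
    lo, hi = event_idx - half_width, event_idx + half_width + 1
    r[lo:hi] -= r[lo:hi].mean()                # window now averages to zero
    return r

rng = np.random.default_rng(1)
rets = rng.normal(0.01, 0.02, 261)             # hypothetical daily return series
neutral = neutralize_event_window(rets, 255)   # event day at index 255
print(abs(neutral[250:261].mean()) < 1e-12)    # True: window mean is now zero
```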
3. Discussion of results
3.1. Simulated contaminated events
Following previous studies, we first provide analytical and empirical evidence of the resulting biases using randomly selected firms with artificially induced abnormal events during the estimation period. Our results are presented in Tables 1 to 4, which show rejection and power rates. In particular, Tables 1 and 3 provide specification analysis (Type I errors) under the null hypothesis of no event effect on the abnormal returns, while Tables 2 and 4 report power analysis (related to Type II errors) under the alternative hypothesis of nonzero mean abnormal returns. Tables 3 and 4 capture the event-induced increase in return variance generated by stochastic shocks, to conform to recent theoretical and empirical evidence that all events induce variance (see Harrington and Shrider 2007). Nevertheless, to facilitate comparisons with the prior literature, Tables 1 and 2 report specification and power analysis under the no event-induced increase in variance case. Results are presented for the BMP, BETA-1, RANK, STAR, GARCH, V-TSMM and MV-TSMM tests.
Prior to examining the performance of the various statistical tests of significance, we investigate the in-sample performance of the models used in this study. We rely on two widely used loss functions, namely the mean square error (MSE) and the mean absolute error (MAE). In particular, we find that the STAR specification delivers the best overall results, with MSE (MAE) equal to 0.98 (0.77), followed by MV-TSMM with loss values of 1.00 (0.81) and V-TSMM with 1.02 (0.82). The remaining two specifications, namely the GARCH and market model specifications, deliver the worst fitting performance, with much higher loss values. The in-sample performance of the alternative market models provides empirical support for applying the STAR specification in the event study framework, since it fits the returns data over the estimation window better than any rival method. Therefore, the in-sample modelling performance of the empirical stock return generating process renders the STAR method more desirable.
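The two loss functions are standard; a minimal sketch with made-up toy numbers (not the paper's data) fixes the definitions used above.

```python
import numpy as np

def mse(actual, fitted):
    """Mean square error of the fitted returns over the estimation window."""
    return float(np.mean((np.asarray(actual) - np.asarray(fitted)) ** 2))

def mae(actual, fitted):
    """Mean absolute error of the fitted returns over the estimation window."""
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(fitted))))

actual = [0.010, -0.020, 0.005, 0.015]         # toy daily returns
fitted = [0.008, -0.015, 0.000, 0.020]         # toy model fits
print(mse(actual, fitted), mae(actual, fitted))
```

MSE penalizes large misses more heavily than MAE, so a model that avoids big residuals on contaminated days (as a regime-switching specification does) tends to score well on both.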
3.1.1. Tests with no change in return variance
Table 1 presents rejection rates for the test statistics when an event creates no abnormal returns and no increase in variance in the event window. Panel A of Table 1 shows that if contaminating events are not present in the estimation window, all tests perform relatively well, with the BMP and TSMM tests looking the most attractive. Similar results are obtained in the presence of contaminating events in the estimation window: as can be seen in Panel B, rejection rates are similar to those presented in Panel A.
Table 2 presents the power analysis for the case where an event creates an increase in the event-window returns but no increase in variance. Panel A provides the analysis when the estimation window is not contaminated, while Panel B presents the same analysis in the presence of contaminating events. In Panel A of Table 2 we observe that RANK is the most powerful test. The least powerful tests are BETA-1 and GARCH. At the 1% significance level, STAR and V-TSMM provide similar results, while at the 5% and 10% levels we find similar results for STAR and MV-TSMM. In the presence of contaminating events, as shown in Panel B of Table 2, BETA-1 and GARCH are still the least powerful tests.