Sri Lankan Journal of Applied Statistics – Latest Articles
https://sljastats.sljol.info/articles/
Latest articles published by the Sri Lankan Journal of Applied Statistics
Wed, 17 Jul 2019 02:49:04 -0000

An appraisal on some methods for estimating the 2-parameter Weibull distribution with application to wind speeds sample
https://sljastats.sljol.info/article/10.4038/sljastats.v18i3.8001
Six methods for estimating the Weibull shape and scale parameters are considered and compared in this paper: the least squares method, the weighted least squares method, the method of moments, the energy pattern factor method, the method of L-moments, and the maximum likelihood method. A simulation study as well as an application to a real data set (a wind speed sample) was used to assess the performance of the methods using the smallest mean square error criterion. Results from the simulation study indicated that the maximum likelihood method is the most efficient for large sample sizes, while the weighted least squares method, the method of moments, and the method of L-moments were quite efficient for small and moderate sample sizes. The maximum likelihood method also performed best when all six methods were applied to the wind speed sample, yielding the smallest mean square error. A notable finding is that the weighted least squares method performed considerably well in estimating the Weibull parameters, a result rarely reported in similar studies.
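As a sketch of two of the estimators compared above, maximum likelihood and the energy pattern factor method, the snippet below fits a two-parameter Weibull to a simulated wind-speed sample. The data, the use of scipy, and the tolerance checks are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

rng = np.random.default_rng(42)
# Simulated wind speeds from a Weibull with shape k=2, scale c=3 (illustrative).
v = rng.weibull(2.0, size=2000) * 3.0

# Maximum likelihood: fix the location at 0 for the two-parameter Weibull.
k_mle, _, c_mle = weibull_min.fit(v, floc=0)

# Energy pattern factor method: Epf = E[v^3] / E[v]^3, k ~ 1 + 3.69 / Epf^2,
# then the scale follows from the sample mean: c = mean / Gamma(1 + 1/k).
epf = np.mean(v**3) / np.mean(v) ** 3
k_epf = 1.0 + 3.69 / epf**2
c_epf = np.mean(v) / gamma(1.0 + 1.0 / k_epf)

print(f"MLE:                   k={k_mle:.3f}, c={c_mle:.3f}")
print(f"Energy pattern factor: k={k_epf:.3f}, c={c_epf:.3f}")
```

Both estimators should land close to the true (k=2, c=3) at this sample size; comparing their mean square errors over repeated simulations mirrors the paper's comparison criterion.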
Published on 2017-12-31

The transmuted geometric-inverse Weibull distribution: properties, characterizations and application
https://sljastats.sljol.info/article/10.4038/sljastats.v18i3.7959
In this paper, a four-parameter flexible lifetime distribution called the transmuted geometric-inverse Weibull (TG-IW) distribution is obtained as a mixture of the inverse Weibull, geometric, and transmuted distributions. Structural and mathematical properties, including descriptive measures based on quantiles, moments, factorial moments, incomplete moments, inequality measures, residual life functions, and other properties, are derived. The TG-IW distribution is characterized via different techniques. The parameters of the TG-IW distribution are estimated by the maximum likelihood method. The significance and flexibility of the TG-IW distribution are assessed through different measures by application to a physical data set.
Published on 2017-12-31

Fractional transportation problem with non-linear discount cost
https://sljastats.sljol.info/article/10.4038/sljastats.v18i3.7935
Fractional programming generalizes linear programming: the objective function is a ratio of two linear functions. Likewise, in the fractional transportation problem the aim is to optimize the ratio of two functions, such as cost, damage, or demand functions. Because a ratio of two functions is considered, fractional programming models are often more appropriate for real-life problems. The fractional transportation problem (FTP) plays an important role in supply management, reducing cost and improving service. In real life, the parameters of such models are rarely known exactly and have to be estimated.
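The ratio objective described above can be handled with the classical Charnes-Cooper transformation, which turns a linear-fractional program into an ordinary LP. The sketch below applies it to a tiny two-source, two-sink transportation instance; the cost data and the use of scipy.optimize.linprog are illustrative assumptions, not the paper's KKT-based method.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny illustrative instance: 2 sources x 2 sinks,
# minimize (c.x) / (d.x) subject to transportation constraints, x >= 0.
c = np.array([4.0, 3.0, 2.0, 5.0])   # numerator costs for x11, x12, x21, x22
d = np.array([2.0, 1.0, 3.0, 2.0])   # denominator coefficients (all positive)
supply = [30.0, 20.0]
demand = [25.0, 25.0]

# Charnes-Cooper: with y = t*x and t = 1/(d.x), minimize c.y subject to
# A y - b t = 0 (transportation constraints scaled by t) and d.y = 1.
A = np.array([[1, 1, 0, 0],    # source 1 ships its full supply
              [0, 0, 1, 1],    # source 2
              [1, 0, 1, 0],    # sink 1 receives its full demand
              [0, 1, 0, 1]])   # sink 2
b = np.array(supply + demand)

A_eq = np.hstack([np.vstack([A, d]), np.append(-b, 0.0).reshape(-1, 1)])
b_eq = np.append(np.zeros(4), 1.0)
obj = np.append(c, 0.0)        # the objective does not involve t directly

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
y, t = res.x[:4], res.x[4]
x = y / t                      # recover the original shipment plan
print("optimal ratio:", res.fun, "shipments:", x.round(3))
```

At the optimum, res.fun equals c.x / d.x for the recovered plan x, which satisfies all supply and demand constraints, so the LP solves the fractional problem exactly.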
This paper investigates the fractional transportation problem (FTP) with a discount cost available at shipment time. The transportation problem, a classical integer programming problem, deals with distributing a commodity from a set of 'sources' to a set of destinations or 'sinks' in the most effective way, subject to given 'supply' and 'demand' constraints. The volume of goods transported from one place to another may attract a discount that effectively reduces the shipment cost, which is directly related to the profit associated with the shipment. The optimal solution of the problem is obtained using the Karush-Kuhn-Tucker (KKT) optimality conditions. Finally, a numerical example is presented to illustrate the algorithm.
Published on 2017-12-31

SAI method for solving job shop sequencing problem under certain and uncertain environment
https://sljastats.sljol.info/article/10.4038/sljastats.v18i3.7911
In this investigation, we use the SAI method (Gupta et al. 2016) for solving sequencing problems in which machine processing times are either certain or uncertain. The procedure adopted for solving the sequencing problems is simple and requires a minimum number of iterations to obtain the job sequence. Uncertainty in the data is represented by triangular or trapezoidal fuzzy numbers. Yager's ranking function is used to convert these fuzzy numbers into crisp values at a prescribed value of α. The stepwise SAI method is then used to obtain the optimal job sequence for the problem. Further, the results obtained by the SAI method are compared with Johnson's method. Numerical examples are given to demonstrate the effectiveness of the proposed approach.
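As a rough sketch of the comparison baseline mentioned above, the snippet below defuzzifies triangular processing times with a graded-mean (Yager-style) index and then applies Johnson's two-machine rule. The fuzzy data and the (l + 2m + u)/4 index are illustrative assumptions, not the paper's exact SAI steps.

```python
def yager_rank(tfn):
    """Crisp value of a triangular fuzzy number (l, m, u) via the
    graded-mean (Yager-style) index (l + 2m + u) / 4 -- an assumption here."""
    l, m, u = tfn
    return (l + 2 * m + u) / 4.0

def johnson_sequence(jobs):
    """Johnson's rule for a 2-machine flow shop.
    jobs: list of (name, time_m1, time_m2). Returns a makespan-optimal order."""
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return front + back

def makespan(seq):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0.0
    for _, a, b in seq:
        t1 += a                # machine 1 finishes this job
        t2 = max(t2, t1) + b   # machine 2 starts when both are free
    return t2

# Illustrative fuzzy processing times (triangular), defuzzified first.
fuzzy_jobs = [("J1", (3, 5, 7), (6, 8, 10)),
              ("J2", (8, 10, 12), (2, 4, 6)),
              ("J3", (1, 2, 3), (5, 6, 7)),
              ("J4", (6, 7, 8), (3, 5, 7))]
crisp = [(n, yager_rank(a), yager_rank(b)) for n, a, b in fuzzy_jobs]
seq = johnson_sequence(crisp)
print([j[0] for j in seq], "makespan:", makespan(seq))
# → ['J3', 'J1', 'J4', 'J2'] makespan: 28.0
```

An SAI-style procedure would replace `johnson_sequence` with its own iteration scheme on the same defuzzified times, which is exactly the comparison the abstract describes.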
Published on 2017-12-31

Modelling auto insurance claims in Singapore
https://sljastats.sljol.info/article/10.4038/sljastats.v18i2.7957
Claim frequency data in general insurance may not follow the traditional Poisson distribution when there are many zeros. When the number of observed zeros exceeds the number expected under the Poisson distribution, extra dispersion appears. This paper summarizes several dispersed and zero-inflated count-data models used to handle dispersion and excess zeros, and applies them to insurance claim counts with excess zeros. We use the chi-square goodness-of-fit test to check the assumed count-data distribution, fit count-data regression models with predictors, and compare the fits through AIC and BIC. The generalized Poisson model and the negative binomial model provide a good fit to the data.
Published on 2017-12-26

Developing a surrogate endpoint for AIDS clinical trials
https://sljastats.sljol.info/article/10.4038/sljastats.v18i2.7955
When developing new treatments, the choice of an endpoint is crucial because that endpoint will be used to assess the effects of the treatments. However, the most sensitive and clinically relevant endpoint, called the 'true endpoint', is often difficult to use in a clinical trial because it can be costly and hard to measure. In such cases the most feasible solution is to replace the true endpoint by another endpoint, termed a 'surrogate endpoint', which can be measured earlier and more frequently. CD4 counts and viral loads are used as surrogate endpoints in the majority of AIDS clinical trials; however, no surrogate endpoint has yet been shown to be suitable for forecasting the effectiveness of anti-HIV treatments.
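The excess-zeros diagnostic described in the auto-insurance claims abstract above can be sketched in a few lines: compare the observed proportion of zeros with the proportion a simple Poisson fit would predict, and check the variance-to-mean dispersion index. The simulated claim counts are an illustrative assumption, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated zero-inflated claim counts: 40% structural zeros, else Poisson(1.5).
n = 5000
is_zero = rng.random(n) < 0.4
counts = np.where(is_zero, 0, rng.poisson(1.5, n))

lam = counts.mean()                        # Poisson MLE of the rate
p0_observed = np.mean(counts == 0)         # observed share of zeros
p0_poisson = np.exp(-lam)                  # zeros a Poisson(lam) would predict
dispersion = counts.var() / counts.mean()  # >1 indicates overdispersion

print(f"observed zeros {p0_observed:.3f} vs Poisson {p0_poisson:.3f}, "
      f"dispersion index {dispersion:.2f}")
```

When the observed zero share clearly exceeds the Poisson prediction and the dispersion index exceeds 1, the zero-inflated and overdispersed alternatives the abstract lists (generalized Poisson, negative binomial, zero-inflated models) become the natural candidates.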
As a solution, the current study aims to develop a surrogate endpoint for AIDS based on a combination of variables, using 16 variables measured in 1151 HIV-infected patients. From descriptive statistics, CD4 cell count and Karnofsky score were identified as potential surrogate candidates. However, a combined-variable model, a score consisting of CD4 count, Karnofsky score, and age, yielded positive results in the log-rank test and conventional statistics. The scoring model satisfied all four of Prentice's criteria and also successfully identified the difference between the two treatments. When CD4 cell count and the combined-variable model were compared as possible surrogate endpoints for AIDS, the combined-variable model proved successful in almost every aspect, and these results surpassed those of similar past studies.
Published on 2017-12-26

Dirichlet model: its consistency and tracking pattern using buying behavior data
https://sljastats.sljol.info/article/10.4038/sljastats.v18i2.7956
The Dirichlet model is a standard choice for modeling customer satisfaction and preference, which makes modeling buying behavior possible. In tracking consumer behavior patterns for the flavors of 7up Bottling Company Nig. Plc, the Dirichlet model provides a theoretical benchmark for price promotions, advertisement, and branding, and is used to predict the future purchasing pattern of consumers with respect to the volumes of Pepsi, Mirinda, and 7up they will purchase. This research examines the consistency of the Dirichlet model in tracking and forecasting customers' purchasing patterns.
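The purchase-probability bookkeeping in the Dirichlet analysis above rests on standard Dirichlet moments: the expected share of flavor i is α_i/α₀ with α₀ = Σα_i. The sketch below checks the closed-form means and variances against Monte Carlo draws; the α values for the three flavors are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative Dirichlet parameters for three flavors (not the paper's fit).
alpha = np.array([3.7, 5.7, 3.6])           # Pepsi, Mirinda, 7up
a0 = alpha.sum()

mean_share = alpha / a0                     # E[p_i] = alpha_i / alpha_0
var_share = alpha * (a0 - alpha) / (a0**2 * (a0 + 1))  # Var[p_i]

draws = rng.dirichlet(alpha, size=100_000)  # simulated purchase-probability vectors
print("closed-form means:", mean_share.round(4))
print("Monte Carlo means:", draws.mean(axis=0).round(4))
```

The spread implied by `var_share` is what makes period-to-period purchase probabilities fluctuate around their means, which is the instability the abstract reports for Pepsi and 7up.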
This was established through evidence from the variance-covariance matrix, the probability vector, the parameter vector, the stochastic matrix, and the probability of sale over the next eight proposed periods for the three flavors. The precision parameter and the Bayes factor were used to test the goodness of fit of the Dirichlet model so that its estimates can be relied upon. The model was fitted to real-life datasets to illustrate its practical behavior; based on the evidence from the probabilities of sale at the proposed sales periods, the Dirichlet model predicted that the probability of purchase is not stable for any one product but varies from period to period. Only the Mirinda flavor had relatively stable sales throughout the proposed period, with an average of 0.569478 (56.9478%), while Pepsi and 7up had lower purchasing probabilities, with averages of 0.368518 (36.85%) and 0.361292 (36.13%), respectively. The Dirichlet model can therefore be relied on to predict customers' buying patterns, enhancing efficient and consistent planning and decision making.
Published on 2017-12-26

Distribution of body mass index of Indian women: a study based on NFHS-2 and NFHS-3
https://sljastats.sljol.info/article/10.4038/sljastats.v18i2.7958
The World Health Organization recommends the Body Mass Index (BMI) as a measure of the nutritional status of adults. This study investigates the distribution of BMI and its changes among Indian women aged 15-49 years, based on samples of 83,646 and 111,983 women from the National Family Health Survey-2 (NFHS-2) and NFHS-3, respectively. Background-characteristic-specific distributional changes in BMI are demonstrated by (i) fitting appropriate probability distributions, (ii) partial sums based on percentiles of the distribution, and (iii) tests of equality of percentiles of the BMI distributions from NFHS-2 and NFHS-3.
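One simple way to test equality of a given BMI percentile across two survey rounds, in the spirit of approach (iii) above, is a bootstrap interval for the percentile difference. The simulated samples below stand in for the NFHS-2 and NFHS-3 data; the distributions and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for BMI samples from two survey rounds (illustrative, not NFHS data).
bmi_r1 = rng.normal(20.5, 3.5, size=4000)
bmi_r2 = rng.normal(21.3, 3.8, size=4000)

def percentile_diff_ci(x, y, q, n_boot=2000, alpha=0.05):
    """Bootstrap CI for the q-th percentile of y minus that of x."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        diffs[i] = np.percentile(yb, q) - np.percentile(xb, q)
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

lo, hi = percentile_diff_ci(bmi_r1, bmi_r2, q=90)
print(f"95% CI for the 90th-percentile shift: [{lo:.2f}, {hi:.2f}]")
# If the interval excludes 0, equality of that percentile is rejected.
```

Repeating this over a grid of percentiles gives a profile of where the two distributions differ, which is how upper-percentile shifts (rising obesity) can coexist with a heavy lower tail (persisting underweight).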
Relative measures R1 and R2 are defined to demonstrate the burden of underweight and obesity. Changes in the prevalence of underweight and obesity, and the annual gain in mean BMI for a synthetic cohort by age group, are presented. Rapid increments are observed in the mean and higher percentiles of BMI among married women, women in the higher age groups, and women from the south zone of India. High prevalence of underweight is observed among rural women (40.6%) and women with a low standard of living (49.7%). A double burden of underweight and obesity is reported among older, rich, and highly educated women and women from the Christian and Sikh religions. Tests of equality of BMI percentiles are rejected.
Published on 2017-12-26

On mutual information for elliptical distributions: a case of nonlinear dependence of ‘n’ vectors
https://sljastats.sljol.info/article/10.4038/sljastats.v18i1.7931
In this paper, we model dependent categorical data via the mutual information concept to obtain a measure of statistical dependence. We first derive the entropy and mutual information index for the exponential power distribution. These concepts, developed by Shannon in the context of information theory, are important, and several results have already been published for the multivariate normal distribution. We then extend these tools to the special case of fully symmetric multivariate elliptical distributions. The upper bound for the entropy, which is attained by the normal density, is established. We further derive a nonlinear joint model for dependent random vectors spanning an elliptical vector space, capturing multivariate relationships among non-empty subsets of vectors via multivariate mutual information, under the assumption that the subsets of each vector and their interactions can be represented in discrete form.
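For the multivariate normal benchmark mentioned above, the entropy and mutual-information index have closed forms: h(X) = ½ ln((2πe)ⁿ |Σ|) and I(X) = −½ ln |R|, where R is the correlation matrix; Hadamard's inequality (|Σ| ≤ ∏ σᵢᵢ²) guarantees I ≥ 0, with equality only under independence. A small numerical check, with an illustrative covariance matrix:

```python
import numpy as np

# Illustrative covariance matrix for a 3-dimensional normal vector.
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 2.0, 0.5],
                  [0.8, 0.5, 1.5]])
n = Sigma.shape[0]

# Differential entropy of N(mu, Sigma): h = 0.5 * ln((2*pi*e)^n * |Sigma|).
entropy = 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(Sigma))

# Mutual information among the n components: I = -0.5 * ln |R|,
# where R is the correlation matrix; I = 0 iff the components are independent.
sd = np.sqrt(np.diag(Sigma))
R = Sigma / np.outer(sd, sd)
mi = -0.5 * np.log(np.linalg.det(R))

print(f"entropy h = {entropy:.4f} nats, mutual information I = {mi:.4f} nats")
```

A useful identity: the entropy of the independent version of the vector (same marginal variances, zero correlations) equals h + I, so I measures exactly how much dependence reduces the joint entropy.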
To illustrate its application, the multivariate dependency among various sites, based on the dominance of some attributes, was investigated.
Published on 2017-08-31

Structural breaks and unit root in macroeconomic time series: evidence from Nigeria
https://sljastats.sljol.info/article/10.4038/sljastats.v18i1.7932
The properties of macroeconomic time series have received considerable interest in the recent literature, because the presence of a unit root in a realization of a stochastic process implies that shocks to the series have a persistent effect, with policy implications. Hence, this paper investigates the unit root properties of ten Nigerian macroeconomic time series using quarterly data from 1981-2015. For comparison, we first apply the conventional augmented Dickey-Fuller unit root test to examine the null of a unit root in the ten macroeconomic series, and then examine the unit root properties using the Lagrange Multiplier (LM) endogenous unit root tests that allow for one and two structural breaks, as proposed by Lee and Strazicich (2003, 2013). Using the augmented Dickey-Fuller test, which does not account for structural breaks, our empirical results indicate that the unit root null hypothesis cannot be rejected for nine of the ten series considered. However, using the LM endogenous one- and two-break tests, we reject the unit root null in favour of the one- and two-break stationary alternative for six of the ten series (a 60% rejection rate). These results imply that unit root tests that do not adequately account for structural breaks lead to misleading inference. These findings have important implications for macroeconomic policy-making, modeling, and forecasting of the Nigerian economy.
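The logic of the conventional Dickey-Fuller benchmark used above can be sketched in a few lines: regress Δy_t on y_{t−1} plus a constant, and compare the slope's t-statistic with the nonstandard critical values (about −2.86 at the 5% level with a constant). The simulated series, and the omission of augmentation lags and break dummies, are illustrative simplifications.

```python
import numpy as np

def df_tstat(y):
    """t-statistic for rho in: diff(y)_t = const + rho * y_{t-1} + e_t.
    Very negative values are evidence against a unit root."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(3)
e = rng.normal(size=500)
random_walk = np.cumsum(e)          # unit-root series
ar1 = np.empty(500)                 # stationary AR(1) with phi = 0.5
ar1[0] = e[0]
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]

# Compare with the Dickey-Fuller 5% critical value (about -2.86 with constant).
print("random walk t =", round(df_tstat(random_walk), 2))
print("AR(1)       t =", round(df_tstat(ar1), 2))
```

A series that is stationary around a broken trend can look like the random-walk case to this regression, which is precisely why the abstract argues for break-aware tests such as Lee-Strazicich.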
We therefore recommend that structural breaks be taken into account in the econometric analysis of Nigerian macroeconomic variables.
Published on 2017-08-31