Second issue of 2003
CONTENTS
by Maurizio Carpita
pages: 32
Abstract
In social and economic research it is common practice to summarize the answers to a group of interview items measured on an ordinal scale into an index that evaluates a particular aspect. This paper examines the possibilities of the optimal scaling algorithm Princals (principal component analysis by means of alternating least squares). The algorithm simultaneously provides a quantification of the ordered answer categories and a weight expressing the importance to be attributed to each item. Princals is tested on a data set collected through a survey on the job quality of 2000 remunerated employees in the Italian social services sector. The results obtained are compared with those of the usual scaling techniques.
Keywords: Optimal scaling, nonlinear PCA, job satisfaction, social services sector.
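Princals itself is implemented in dedicated software (e.g. SPSS Categories or R's Gifi package) and is not reproduced here. As a minimal sketch of the linear-PCA baseline against which such optimal-scaling results are compared, the following hypothetical helper extracts first-component loadings to use as item weights; the function name and the standardization step are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def pca_item_weights(X):
    """First-principal-component loadings as item weights (linear PCA baseline).

    Princals additionally rescales the ordinal categories themselves via
    alternating least squares; this sketch only weights the items.
    """
    # Standardize each item (column) before extracting components
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # Rows of Vt are the principal axes; the first row gives the loadings
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    w = Vt[0]
    # Fix the sign so that the weights are interpretable as positive importance
    return w if w.sum() >= 0 else -w

# A respondent's index is then the weighted sum of standardized answers:
# scores = (X - X.mean(axis=0)) / X.std(axis=0) @ pca_item_weights(X)
```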
by Laura Pagani, Maria Chiara Zanarotti
pages: 20
Abstract
The evaluation of service quality is as important as it is problematic. Evaluating the quality of a service involves all the difficulties one typically meets when trying to measure an abstract concept: quality is a hidden aspect of a service that cannot be observed directly. In this paper we consider a possible approach to measuring quality: the use of the Rasch model (originally introduced in psychometrics) in customer satisfaction analysis. After a brief introduction to the Rasch model, the paper focuses on how the measurement scale used in collecting the data affects the results obtained. In particular, a data set on the evaluation of teachers' effectiveness in university courses is used to compare two measurement scales that differ in the number of (ordinal) categories (five versus four).
Keywords: Customer satisfaction, ordinal scales, Rasch model.
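For readers unfamiliar with the model, the dichotomous Rasch model states that the probability of a positive response depends only on the difference between a person parameter θ and an item parameter β. A minimal sketch (the function name is illustrative; the paper actually uses polytomous ordinal data, for which rating-scale extensions of this formula apply):

```python
import math

def rasch_prob(theta, beta):
    """Probability of a positive response under the dichotomous Rasch model:
    P(X = 1 | theta, beta) = exp(theta - beta) / (1 + exp(theta - beta)).
    theta: person ability/satisfaction parameter; beta: item difficulty/severity.
    """
    return 1.0 / (1.0 + math.exp(-(theta - beta)))
```

When ability equals difficulty the probability is exactly 0.5, and it increases monotonically in theta, which is what makes the person and item parameters jointly interpretable on one latent scale.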
by Paola Zuccolotto
pages: 18
Abstract
Ultra-high frequency data are a recent challenge for statistics applied to financial markets. The main feature of this kind of data is that observations are unequally spaced in time, so that completely new models are necessary for their treatment. A successful proposal in this context is the class of ACD models, introduced by Engle and Russell in 1998 and since then deeply analyzed and widely applied to several datasets. One of the many extensions proposed is the d-ACD model (Zuccolotto, 2002), a specification able to take into account, when present, the relation between opening volume and daily intensity of trading. It is reasonable to argue that the influence of the opening volume on the intensity of trading is high when the market opens and tends to decrease during the day. This feature is accounted for in this paper, where a refined version of the d-ACD model, called d-ACDλ, introduces a smoothing factor on opening volumes. The performance of the new model is checked on three stocks of the Italian financial market.
Keywords: Ultra-high frequency data, ACD models, opening volumes, daily ACD model.
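The baseline ACD(1,1) model of Engle and Russell writes each duration as x_i = ψ_i ε_i, with conditional expected duration ψ_i = ω + α x_{i-1} + β ψ_{i-1} and i.i.d. unit-mean innovations. A minimal simulation sketch of this baseline (not of the d-ACDλ extension, whose opening-volume smoothing is specific to the paper; parameter values are illustrative assumptions):

```python
import numpy as np

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, seed=None):
    """Simulate n durations from an ACD(1,1) process (Engle & Russell, 1998):
    x_i = psi_i * eps_i,  psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    with i.i.d. unit-mean exponential innovations eps_i.
    Requires alpha + beta < 1 for a finite unconditional mean duration.
    """
    rng = np.random.default_rng(seed)
    psi = omega / (1.0 - alpha - beta)  # start at the unconditional mean
    x = np.empty(n)
    for i in range(n):
        x[i] = psi * rng.exponential(1.0)
        psi = omega + alpha * x[i] + beta * psi
    return x
```

With ω = 0.1 and α + β = 0.9, the unconditional mean duration is ω / (1 − α − β) = 1, so long simulated paths should average near one.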
by Francesca Greselin
pages: 18
Abstract
The problem of finding the number of rectangular tables of non-negative integers with given row and column sums occurs in many interesting contexts, mainly in combinatorial problems (counting magic squares, enumerating permutations by descents, etc.) and in statistical applications (studying contingency tables with given margins, testing for independence, etc.). In the present paper a new recursive argument is presented to produce a general expression for the number of m×n tables with given margins. The result has the same expressive force as the one presented by Gail and Mantel (1977), but, remarkably, the counting approach also quite naturally suggests a recursive algorithm to explicitly generate the entire class of tables.
This work is a necessary step towards the study of a new measure of association, based on the relative position that a given table occupies in its class, endowed with an association ordering.
Keywords: Contingency tables, frequency tables, enumeration of contingency tables with given margins, counting contingency tables with given margins.
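The recursive idea can be illustrated directly: fix the first row to any composition of its row sum that respects the column sums, subtract it from the column margins, and recurse on the remaining rows. The sketch below is a generic memoized implementation of this idea, not the paper's specific formula; for instance, the 2×2 tables with all margins equal to 2 number exactly three.

```python
from functools import lru_cache

def count_tables(row_sums, col_sums):
    """Count non-negative integer m x n tables with the given row and column sums,
    by recursing row by row over admissible first rows."""
    if sum(row_sums) != sum(col_sums):
        return 0  # inconsistent margins: no table exists

    def compositions(total, bounds):
        # All tuples x with sum(x) == total and 0 <= x[i] <= bounds[i]
        if len(bounds) == 1:
            if total <= bounds[0]:
                yield (total,)
            return
        for first in range(min(total, bounds[0]) + 1):
            for rest in compositions(total - first, bounds[1:]):
                yield (first,) + rest

    @lru_cache(maxsize=None)
    def rec(rows_left, cols):
        if not rows_left:
            # Valid only if every column sum has been exactly consumed
            return 1 if all(c == 0 for c in cols) else 0
        return sum(
            rec(rows_left[1:], tuple(c - x for c, x in zip(cols, row)))
            for row in compositions(rows_left[0], cols)
        )

    return rec(tuple(row_sums), tuple(col_sums))
```

Replacing the sum in `rec` with a loop that yields the partial tables turns the same recursion into the generator of the whole class that the abstract mentions.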
by Piero Quatto
pages: 10
Abstract
In the Neyman-Pearson theory, the notion of least favorable mixture reduces a composite null hypothesis to a simple one that is equivalent for the purpose of finding a most powerful test against a simple alternative.
The aim of this paper is to prove a property of mixtures from which the Lehmann-Stein theorem (1948), connecting least favorable mixtures with most powerful tests, follows.
As an application, the classical χ² test for the variance of a normal distribution with unknown mean is considered.
Keywords: Most Powerful Test, Neyman-Pearson Theory, Least Favorable Mixture.
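The application mentioned above is the standard test of H₀: σ² ≤ σ₀² against H₁: σ² > σ₀² for a normal sample with unknown mean, based on the statistic (n − 1)s²/σ₀², which is χ² with n − 1 degrees of freedom at the boundary of H₀. A minimal sketch of that classical test (the function name and defaults are illustrative, and this shows only the test itself, not the least-favorable-mixture argument that justifies it):

```python
import numpy as np
from scipy.stats import chi2

def chi2_variance_test(x, sigma0_sq, alpha=0.05):
    """One-sided chi-square test for the variance of a normal sample,
    H0: sigma^2 <= sigma0_sq vs H1: sigma^2 > sigma0_sq, mean unknown.
    Returns (statistic, p_value, reject_H0)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = np.var(x, ddof=1)                 # unbiased sample variance
    stat = (n - 1) * s2 / sigma0_sq        # ~ chi2(n-1) at the boundary of H0
    p_value = chi2.sf(stat, df=n - 1)      # upper-tail probability
    return stat, p_value, p_value < alpha
```

Estimating the unknown mean by the sample mean is exactly what costs one degree of freedom, and it is this nuisance parameter that the least favorable mixture handles in the optimality argument.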