IndexNumR is a package for computing indices of aggregate prices or quantities using information on the prices and quantities on multiple products over multiple time periods. Such numbers are routinely computed by statistical agencies to measure, for example, the change in the general level of prices, production inputs and productivity for an economy. Well known examples are consumer price indices and producer price indices.
In recent years, advances have been made in index number theory to address biases in many well known and widely used index number methods. One area of development has been the adaptation of multilateral methods, commonly used in cross-sectional comparisons, to the time series context. This typically involves more computational complexity than bilateral methods. IndexNumR provides functions that make it easy to estimate indices using common index number methods, as well as multilateral methods.
This first section covers the inputs into the main index number functions and how the data are to be organised to use these functions.
The index number functions such as priceIndex, quantityIndex and GEKSIndex all take a dataframe as their first argument. This dataframe should contain everything needed to compute the index. In general this includes columns for prices, quantities, a time period variable and a product identifier.
One exception to the above is when elementary indexes are estimated
using the priceIndex
function. A quantity variable is not
required in this case because the index is unweighted, and in many cases
quantities may not be available (for example, when statistical agencies
collect sample prices on individual products).
The dataframe must have column names, since character strings are
used in other arguments to the index number functions to specify which
columns contain the data listed above. Column names can be set with the
colnames
function of base R. The sample dataset CES_sigma_2
is an example of the minimum dataframe required to compute an index.
## time prices quantities prodID
## 1 1 2.00 0.3846154 1
## 2 2 1.75 0.5846626 1
## 3 3 1.60 0.7135502 1
## 4 4 1.50 0.9149417 1
## 5 5 1.45 1.0280574 1
## 6 6 1.40 1.2058234 1
In this case, the dataframe is sorted by the product identifier prodID, but it need not be sorted at all.
To be able to compute indices, the data need to be subset in order to extract all observations on products for given periods. The approach used in IndexNumR is to require a time period variable as an input into many of its functions that will be used for subsetting. This time period variable must satisfy the following: it must be an integer, starting at 1 for the first time period and increasing in increments of one for each subsequent period.
The variable may, and in fact likely will, have many observations for a given time period, since there are generally multiple items with price and quantity information. For example, the CES_sigma_2 dataset has observations on 4 products for each time period. We can see this by observing the first few rows of the dataset sorted by the time period.
## time prices quantities prodID
## 1 1 2.00 0.3846154 1
## 13 1 1.00 1.5384615 2
## 25 1 1.00 1.5384615 3
## 37 1 0.50 12.3076923 4
## 2 2 1.75 0.5846626 1
## 14 2 0.50 7.1621164 2
The user can provide their own time variable, or if a date variable is available, IndexNumR has four functions that can compute the required time variable: yearIndex, quarterIndex, monthIndex and weekIndex. Users should be aware that if there is a very large number of observations then these functions can take some time to compute, but once the time variable has been computed it is easier and faster to work with than dates.
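Since the required time variable is just an integer index of periods, its construction from dates can be sketched in a few lines of base R. This is only an illustration of the idea with made-up dates; it is not the implementation of monthIndex.

```r
# Sketch: build an integer month index from a date column in base R.
# The package's monthIndex function plays this role; this is not its code.
dates <- as.Date(c("2020-01-15", "2020-01-20", "2020-02-03", "2020-04-11"))
yr <- as.integer(format(dates, "%Y"))
mo <- as.integer(format(dates, "%m"))
# anchor the earliest calendar month at period 1
timeIndex <- 12L * yr + mo - min(12L * yr + mo) + 1L
timeIndex  # 1 1 2 4: April is period 4 even though March has no observations
```

Note that multiple observations falling in the same month receive the same index, as described above.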
A related issue is that of aggregating data collected at some higher frequency to a lower frequency. When computing index numbers, this is often done by computing a unit value as follows, \[\begin{equation}
UV_{t} =
\frac{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}q^{t}_{n}}
\end{equation}\] That is, sum total expenditure on each item over the required period and divide by the total quantity. Provided that a time period variable as described above is available, the unit values can be computed using the function unitValues. This function returns the unit values, along with the aggregate quantities for each time period and each product. The output also includes the product identifier and time period variable, so the output dataframe from the unitValues function contains everything needed to compute an index number.
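The unit value formula above can be sketched directly in base R. The column names follow CES_sigma_2; this is an illustration of the calculation, not the unitValues implementation.

```r
# Sketch of the unit value calculation: total expenditure over total
# quantity, per product and period. Column names follow CES_sigma_2.
unitValueSketch <- function(df) {
  df$exp <- df$prices * df$quantities
  agg <- aggregate(cbind(exp, quantities) ~ prodID + time, data = df, FUN = sum)
  agg$unitValue <- agg$exp / agg$quantities
  agg[, c("prodID", "time", "unitValue", "quantities")]
}

# two high-frequency observations on one product collapse to one unit value
df <- data.frame(time = c(1, 1), prices = c(2, 4),
                 quantities = c(10, 5), prodID = c(1, 1))
unitValueSketch(df)$unitValue  # (2*10 + 4*5) / (10 + 5) = 40/15
```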
IndexNumR provides a sample dataset, CES_sigma_2, that contains prices and quantities on four products over twelve time periods, consistent with consumers displaying CES preferences with an elasticity of substitution equal to two. This dataset is calculated using the method described in (W. Erwin Diewert and Fox 2017). We start with prices for each of \(N\) products in each of \(T\) time periods, an N-dimensional vector of preference parameters \(\alpha\), and a T-dimensional vector of total expenditures. Then calculate the expenditure shares for each product in each time period using,
\[\begin{equation} s_{tn} = \frac{\alpha_{n}p_{tn}^{1-\sigma}}{\sum_{n=1}^{N}\alpha_{n}p_{tn}^{1-\sigma}} \end{equation}\] and use those shares to calculate the quantities,
\[\begin{equation} q_{tn} = \frac{e_{t}s_{tn}}{p_{tn}} \end{equation}\]
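These two steps transcribe directly into R. The inputs below (a small price matrix, \(\alpha\) and expenditures) are illustrative, but with sigma = 2 they reproduce the first rows of CES_sigma_2 shown earlier.

```r
# Sketch of the CES data-generating steps: shares from prices and alpha,
# then quantities from shares and expenditures. Inputs are illustrative.
cesQuantities <- function(p, alpha, e, sigma) {
  # expenditure shares: s_tn = alpha_n p_tn^(1-sigma) / sum_n(...)
  num <- sweep(p^(1 - sigma), 2, alpha, "*")
  s <- num / rowSums(num)
  # quantities: q_tn = e_t * s_tn / p_tn
  (e * s) / p
}

p <- matrix(c(2, 1, 1, 0.5,
              1.75, 0.5, 0.95, 0.55), nrow = 2, byrow = TRUE)
alpha <- c(0.2, 0.2, 0.2, 0.4)
q <- cesQuantities(p, alpha, e = c(10, 13), sigma = 2)
round(q[1, ], 4)  # period 1 quantities: 0.3846 1.5385 1.5385 12.3077
```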
IndexNumR provides the function CESData
to
produce datasets assuming CES preferences as above for any elasticity of
substitution \(\sigma\), using the
prices, \(\alpha\), and expenditure
values assumed in (W. Erwin Diewert and Fox
2017). The vector \(\alpha\)
is,
\[\begin{equation} \alpha = \begin{bmatrix} 0.2 & 0.2 & 0.2 & 0.4 \end{bmatrix} \end{equation}\]
and the prices and expenditures are,
t | p1 | p2 | p3 | p4 | e |
---|---|---|---|---|---|
1 | 2.00 | 1.00 | 1.00 | 0.50 | 10 |
2 | 1.75 | 0.50 | 0.95 | 0.55 | 13 |
3 | 1.60 | 1.05 | 0.90 | 0.60 | 11 |
4 | 1.50 | 1.10 | 0.85 | 0.65 | 12 |
5 | 1.45 | 1.12 | 0.40 | 0.70 | 15 |
6 | 1.40 | 1.15 | 0.80 | 0.75 | 13 |
7 | 1.35 | 1.18 | 0.75 | 0.70 | 14 |
8 | 1.30 | 0.60 | 0.72 | 0.65 | 17 |
9 | 1.25 | 1.20 | 0.70 | 0.70 | 15 |
10 | 1.20 | 1.25 | 0.40 | 0.75 | 18 |
11 | 1.15 | 1.28 | 0.70 | 0.75 | 16 |
12 | 1.10 | 1.30 | 0.65 | 0.80 | 17 |
A common issue when computing index numbers is that the sample of products over which the index is computed changes over time. Since price and quantity information is generally needed on the same set of products for each pair of periods being compared, the index calculation functions provided in IndexNumR offer the option sample="matched" to use only a matched sample of products. How the matching is performed depends on whether the index is bilateral or multilateral. For bilateral indices the price and quantity information is extracted for a pair of periods, any non-overlapping products are removed, and the index is computed over these matched products. This is repeated for each pair of periods over which the index is being computed.
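The matching step for a single pair of periods can be sketched as follows, using the same column names as CES_sigma_2. This mirrors the idea behind sample = "matched", not IndexNumR's internal implementation.

```r
# Minimal sketch of matched-sample extraction for one pair of periods:
# keep only products observed in both the base and current period.
matchedPair <- function(df, t0, t1) {
  d0 <- df[df$time == t0, ]
  d1 <- df[df$time == t1, ]
  common <- intersect(d0$prodID, d1$prodID)  # drop non-overlapping products
  list(base = d0[d0$prodID %in% common, ],
       current = d1[d1$prodID %in% common, ])
}

df <- data.frame(time = c(1, 1, 2, 2, 2),
                 prices = c(2, 1, 1.75, 0.5, 3),
                 quantities = c(1, 2, 1, 2, 1),
                 prodID = c(1, 2, 1, 2, 3))
m <- matchedPair(df, 1, 2)
m$current$prodID  # product 3 only appears in period 2, so it is dropped
```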
For multilateral indexes it is somewhat different. For the GEKS index, the matching is performed for each bilateral comparison that enters into the calculation of the multilateral index (see section on the GEKS index below). For the Geary-Khamis and Weighted-Time-Product-Dummy methods, matching can be performed over each window of data. That is, only products that appear in all time periods within each calculation window are kept. For these two indexes a matched sample is not required; by default, IndexNumR will set price and quantity to zero for all missing observations, to allow the index to be computed. For the WTPD index, this can be shown to give the same result as running a weighted least squares regression on the available pooled data.
Matched-sample indexes may suffer from bias. As a simple assessment of the potential bias, the function evaluateMatched calculates the proportion of total expenditure that the matched sample covers in each time period. The function provides output for expenditure as well as counts, and can evaluate overlap using either a chained or fixed base index.
The first four columns of the output present the base period information: base_index (the time index of the base period), base (total base period expenditure or count), base_matched (the expenditure or count of the base period for matched products), and base_share (the share of total expenditure in the base period that remains after matching). Columns 5-8 report the same information for the current period. Columns 4 and 8 can be expressed as, \[\begin{equation} \lambda_{t} = \frac{\sum_{n\in I(1)\cap I(0)}p_{n}^{t}q_{n}^{t}}{\sum_{n\in I(t)}p_{n}^{t}q_{n}^{t}} \quad \text{for } t \in \{1,0\}, \end{equation}\] where \(I(t)\) is the set of products available in period \(t\), \(t=1\) refers to the current period and is used to compute column 8, and \(t=0\) refers to the comparison period, which is used to compute column 4.
The count matrix has two additional columns, “new” and “leaving”. The new column gives the number of products that exist in the current period but not the base period (products entering the sample). The leaving column gives the count of products that exist in the base period but not the current period (products leaving the sample). Matching removes both of these types of products.
An alternative to using a matched sample of products is to impute the missing data. One technique for doing this is to replace missing values with the last actual price observation. If the data have both prices and quantities then the corresponding quantities are set to zero. If the missing observations occur at the beginning of the time series then the first actual observation is carried backward to the first time period. IndexNumR performs this carry price imputation with the imputeCarryPrices function; however, this is only needed if the imputed data themselves are of interest. Otherwise, the price index functions can use carry price imputation by setting the parameter imputePrices = "carry".
In the example below, the first two observations on product 1 are missing, so the price from the third period is carried backwards to fill the missing observations. Observations 3 and 4 are missing on product 2, so the price in period 2 is carried forward to fill them. The corresponding quantities are set to zero.
library(magrittr) # provides the %>% pipe used below

# create a dataset with some missing observations on products 1 and 2
df <- CES_sigma_2[-c(1,2,15,16),]
df <- df[df$prodID %in% 1:2 & df$time <= 6,]
dfMissing <- df[, c("time", "prices", "prodID")] %>%
tidyr::pivot_wider(id_cols = time, names_from = prodID, values_from = prices)
dfMissing[order(dfMissing$time),]
## # A tibble: 6 × 3
## time `1` `2`
## <int> <dbl> <dbl>
## 1 1 NA 1
## 2 2 NA 0.5
## 3 3 1.6 NA
## 4 4 1.5 NA
## 5 5 1.45 1.12
## 6 6 1.4 1.15
# compute carry prices
carryPrices <- imputeCarryPrices(df, pvar = "prices", qvar = "quantities",
pervar = "time", prodID = "prodID")
# print the data with the product prices in columns to see the filled data
carryPrices[, c("time", "prices", "prodID")] %>%
tidyr::pivot_wider(id_cols = time, names_from = prodID, values_from = prices)
## # A tibble: 6 × 3
## time `1` `2`
## <dbl> <dbl> <dbl>
## 1 1 1.6 1
## 2 2 1.6 0.5
## 3 3 1.6 0.5
## 4 4 1.5 0.5
## 5 5 1.45 1.12
## 6 6 1.4 1.15
# print the data with the product quantities in columns to see the corresponding zeros
carryPrices[, c("time", "quantities", "prodID")] %>%
tidyr::pivot_wider(id_cols = time, names_from = prodID, values_from = quantities)
## # A tibble: 6 × 3
## time `1` `2`
## <dbl> <dbl> <dbl>
## 1 1 0 1.54
## 2 2 0 7.16
## 3 3 0.714 0
## 4 4 0.915 0
## 5 5 1.03 1.72
## 6 6 1.21 1.79
Bilateral index numbers are those that examine the movement between two periods. All of the bilateral index numbers can be computed as period-on-period, chained or fixed base. A period-on-period index simply measures the change from one period to the next. A chained index gives the cumulative change, calculated as the cumulative product of the period-on-period index. A fixed base index compares each period to the base period. This is also called a direct index because, unlike a chained index, it does not go through the intermediate periods to measure the change since the base period. The formulae used to compute the bilateral index numbers from period t-1 to period t are given below.
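The relationship between the period-on-period and chained outputs can be sketched with an illustrative vector of period-on-period movements:

```r
# An illustrative vector of period-on-period index numbers: the chained
# index is their cumulative product, so period 4 shows the cumulative
# change since period 1.
pop <- c(1, 1.02, 0.99, 1.05)
chained <- cumprod(pop)
round(chained, 4)  # 1.0000 1.0200 1.0098 1.0603
```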
Carli index (Carli 1804), \[\begin{equation*} P(p^{t-1},p^{t}) = \frac{1}{N}\sum_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right) \end{equation*}\]
Jevons index (Jevons 1865), \[\begin{equation*} P(p^{t-1},p^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1/N)} \end{equation*}\]
Dutot index (Dutot 1738), \[\begin{equation*} P(p^{t-1},p^{t}) = \frac{\sum_{n=1}^{N}p^{t}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}} \end{equation*}\]
Laspeyres index (Laspeyres 1871), \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t-1}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{t-1}_{n}} \end{equation*}\]
Paasche index (Paasche 1874) \[\begin{equation*} P(p^{t-1},p^{t},q^{t}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{t}_{n}} \end{equation*}\]
Geometric Laspeyres index (Konüs and Byushgens 1926) \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{s^{t-1}_{n}}, \end{equation*}\] where \(s^{t}_{n} = \frac{p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}\) is the share of period \(t\) expenditure on good \(n\).
Geometric Paasche index (Konüs and Byushgens 1926) \[\begin{equation*} P(p^{t-1},p^{t},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{s^{t}_{n}}, \end{equation*}\] where \(s^{t}_{n}\) is defined as above for the geometric Laspeyres index.
Lowe index (Lowe 1823) \[\begin{equation*} P(p^{t-1},p^{t},q^{b}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{b}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{b}_{n}}, \end{equation*}\] where \(b\) can be any period, or range of periods, in the dataset.
Young index (Young 1812) \[\begin{equation*} P(p^{t-1},p^{t},p^{b},q^{b}) = \sum_{n=1}^{N}s^{b}_{n}\frac{p^{t}_{n}}{p^{t-1}_{n}}, \end{equation*}\] where \(b\) can be any period, or range of periods, in the dataset.
Drobisch index (Drobisch 1871) \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = (P_{L}+P_{P})/2, \end{equation*}\] where \(P_{L}\) is the Laspeyres price index and \(P_{P}\) is the Paasche price index.
Marshall-Edgeworth index (Marshall 1887), (Edgeworth 1925) \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}p_{n}^{t}(q_{n}^{t-1}+q_{n}^{t})}{\sum_{n=1}^{N}p_{n}^{t-1}(q_{n}^{t-1}+q_{n}^{t})} \end{equation*}\]
Palgrave index (Palgrave 1886) \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \sum_{n=1}^{N}s^{t}_{n}\frac{p^{t}_{n}}{p^{t-1}_{n}}, \end{equation*}\] where \(s^{t}_{n}\) is defined as above for the geometric Laspeyres index.
Fisher index (Fisher 1921), \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = [P_{P}P_{L}]^{\frac{1}{2}}, \end{equation*}\] where \(P_{P}\) is the Paasche index and \(P_{L}\) is the Laspeyres index. The Fisher index has other representations, but this is the one used by IndexNumR in its computations.
Tornqvist index (Törnqvist 1936; Törnqvist and Törnqvist 1937), \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{\left(s^{t-1}_{n}+s^{t}_{n}\right)/2}, \end{equation*}\] where \(s^{t}_{n}\) is defined as above for the geometric Laspeyres index.
Walsh index, \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}\sqrt{q^{t-1}_{n}q^{t}_{n}}\cdot p^{t}_{n}}{\sum_{n=1}^{N}\sqrt{q^{t-1}_{n}q^{t}_{n}}\cdot p^{t-1}_{n}} \end{equation*}\]
Sato-Vartia index (Sato 1976; Vartia 1976), \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{w_{n}} \end{equation*}\] where the weights are normalised to sum to one, \[\begin{equation*} w_{n} = \frac{w^{*}_{n}}{\sum_{n=1}^{N}w^{*}_{n}} \end{equation*}\] and \(w^{*}_{n}\) is the logarithmic mean of the shares, \[\begin{equation*} w^{*}_{n} = \frac{s^{t}_{n}-s^{t-1}_{n}}{\log (s^{t}_{n}) - \log (s^{t-1}_{n})} \end{equation*}\]
Geary-Khamis (Khamis 1972) \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}h(q^{t-1}_{n}, q^{t}_{n})p^{t}_{n}}{\sum_{n=1}^{N}h(q^{t-1}_{n}, q^{t}_{n})p^{t-1}_{n}} \end{equation*}\] where h() is the harmonic mean.
Stuvel index (Stuvel 1957) \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = A + \sqrt{A^2 + V^{t}/V^{t-1}}, \end{equation*}\] where \(V^{t}\) is the value of total sales in period \(t\), \(A = (P_{L}-Q_{L})/2\), \(P_{L}\) is the Laspeyres price index and \(Q_{L}\) is the Laspeyres quantity index.
CES index, also known as the Lloyd-Moulton index (Lloyd 1975; Moulton 1996), \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1}) = \left[\sum_{n=1}^{N}s_{n}^{t-1}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1-\sigma)}\right]^{\left(\frac{1}{1-\sigma}\right)}, \end{equation*}\] where \(\sigma\) is the elasticity of substitution.
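As an illustration, four of the formulas above transcribe directly into R functions of the price and quantity vectors for a single pair of periods. This is a sketch of the formulas, not IndexNumR's internals; the data are two products from CES_sigma_2.

```r
# Direct transcriptions of the Laspeyres, Paasche, Fisher and Tornqvist
# formulas for one pair of periods (p0, q0 = period t-1; p1, q1 = period t).
laspeyres <- function(p0, p1, q0, q1) sum(p1 * q0) / sum(p0 * q0)
paasche   <- function(p0, p1, q0, q1) sum(p1 * q1) / sum(p0 * q1)
fisher    <- function(p0, p1, q0, q1)
  sqrt(laspeyres(p0, p1, q0, q1) * paasche(p0, p1, q0, q1))
tornqvist <- function(p0, p1, q0, q1) {
  s0 <- p0 * q0 / sum(p0 * q0)  # period t-1 expenditure shares
  s1 <- p1 * q1 / sum(p1 * q1)  # period t expenditure shares
  prod((p1 / p0)^((s0 + s1) / 2))
}

p0 <- c(2, 1); p1 <- c(1.75, 0.5)
q0 <- c(0.3846154, 1.5384615); q1 <- c(0.5846626, 7.1621164)
fisher(p0, p1, q0, q1)  # geometric mean of the Laspeyres and Paasche results
```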
This is a regression model approach where log prices are modelled as a function of time and product dummies. The regression equation is given by,
\[\begin{equation*} \ln{p_{n}^{t}} = \alpha + \beta_{1} D^{t} + \sum_{n = 2}^{N}\beta_{n}D_{n} + \epsilon_{n}^{t}, \end{equation*}\] where \(D^{t}\) is equal to 1 in period \(t\) and 0 in period \(t-1\), and \(D_{n}\) is equal to 1 if the product is product \(n\) and 0 otherwise.
The price index is then given by, \[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \exp({\hat{\beta_{1}}}) \end{equation*}\]
However, this is a biased estimate (Kennedy 1981), so IndexNumR optionally calculates the following adjusted estimate,
\[\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \exp({\hat{\beta_{1}} - 0.5 \times Var(\hat{\beta_{1}})}) \end{equation*}\]
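The regression and the Kennedy adjustment can be sketched for one pair of periods using lm(); the data here are illustrative and this is not the package's own implementation.

```r
# Sketch of the time-product-dummy regression: log price on a period dummy
# and product dummies, with the Kennedy bias adjustment.
df <- data.frame(price   = c(2, 1, 0.5, 1.75, 0.5, 0.55),
                 period  = factor(c(0, 0, 0, 1, 1, 1)),
                 product = factor(c(1, 2, 3, 1, 2, 3)))
fit <- lm(log(price) ~ period + product, data = df)
b1  <- coef(fit)[["period1"]]              # time dummy coefficient
v1  <- vcov(fit)["period1", "period1"]     # its estimated variance
exp(b1)              # unadjusted TPD index: exp(beta_1)
exp(b1 - 0.5 * v1)   # Kennedy-adjusted estimate
```

With unweighted OLS on a balanced (matched) sample like this one, exp(b1) reproduces the matched-sample Jevons index, as the following text notes.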
The time-product-dummy equation can be estimated in IndexNumR using three methods via the weights parameter: ordinary least squares; weighted least squares where the weights are the product expenditure shares; or weighted least squares where the weights are the average of the expenditure shares in the two periods. In the first case, the index produced is the same as the matched sample Jevons index, which does not use quantity information. The second option produces a matched sample harmonic share weights index, and the last option produces the matched sample Tornqvist index. See (Walter E. Diewert 2005b) for a discussion of these results.
To estimate a simple chained Laspeyres price index,
priceIndex(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
indexMethod = "laspeyres",
output = "chained")
## [,1]
## [1,] 1.0000000
## [2,] 0.9673077
## [3,] 1.2905504
## [4,] 1.3382002
## [5,] 1.2482444
## [6,] 1.7346552
## [7,] 1.6530619
## [8,] 1.4524186
## [9,] 1.8386215
## [10,] 1.7126802
## [11,] 2.1810170
## [12,] 2.2000474
Estimating multiple different index numbers on the same data is straightforward,
methods <- c("laspeyres","paasche","fisher","tornqvist")
prices <- lapply(methods,
function(x) {priceIndex(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
indexMethod = x,
output = "chained")})
as.data.frame(prices, col.names = methods)
## laspeyres paasche fisher tornqvist
## 1 1.0000000 1.0000000 1.0000000 1.0000000
## 2 0.9673077 0.8007632 0.8801048 0.8925715
## 3 1.2905504 0.8987146 1.0769571 1.0789612
## 4 1.3382002 0.9247902 1.1124543 1.1146080
## 5 1.2482444 0.6715974 0.9155969 0.9327861
## 6 1.7346552 0.7858912 1.1675831 1.1790710
## 7 1.6530619 0.7472454 1.1114148 1.1223220
## 8 1.4524186 0.5836022 0.9206708 0.9379711
## 9 1.8386215 0.6431381 1.0874224 1.0961295
## 10 1.7126802 0.5145138 0.9387213 0.9527309
## 11 2.1810170 0.5736947 1.1185875 1.1288419
## 12 2.2000474 0.5745408 1.1242851 1.1346166
This illustrates the Laspeyres index’s substantial positive bias, the Paasche index’s substantial negative bias, and the similar estimates produced by the Fisher and Tornqvist superlative index numbers.
The CES index number method requires an elasticity of substitution parameter in order to be calculated. IndexNumR provides a function elasticity to estimate the elasticity of substitution parameter, following the method of (Balk 2000). The basic method is to solve for the value of the elasticity of substitution that equates the CES index to a comparison index. One comparison index noted by Balk is the ‘current period’ CES index, \[\begin{equation}
\left[\sum_{n=1}^{N}s_{n}^{t}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{-(1-\sigma)}\right]^{\left(\frac{-1}{1-\sigma}\right)}.
\end{equation}\] Therefore, we numerically calculate the value of \(\sigma\) that solves, \[\begin{equation}
\left[\sum_{n=1}^{N}s_{n}^{t-1}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1-\sigma)}\right]^{\left(\frac{1}{1-\sigma}\right)}
-
\left[\sum_{n=1}^{N}s_{n}^{t}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{-(1-\sigma)}\right]^{\left(\frac{-1}{1-\sigma}\right)}
= 0.
\end{equation}\]
This is done using the uniroot function of the stats package distributed with base R. Note that this equation can be used to solve for sigma for any \(t=2,\cdots,T\), so there are \(T-1\) potential estimates of sigma. The elasticity function will return all \(T-1\) estimates as well as the arithmetic mean of the estimates. In addition to the current period CES index, Balk also notes that the Sato-Vartia index can be used, while (Ivancic, Diewert, and Fox 2010) note that a Fisher index could be used. Any of these three indexes can be used as the comparison index by specifying the compIndex option as either "fisher", "ces" or "satovartia". The current period CES index is the default.
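The root-finding step can be sketched for a single pair of periods: write the difference between the two CES indexes above as a function of sigma and hand it to uniroot. The prices, alpha and shares below are illustrative (constructed so the true sigma is 2); this is not the elasticity function's implementation.

```r
# Difference between the base-period and current-period CES indexes as a
# function of sigma, for one pair of periods.
cesDiff <- function(sigma, p0, p1, s0, s1) {
  base <- sum(s0 * (p1 / p0)^(1 - sigma))^(1 / (1 - sigma))
  curr <- sum(s1 * (p1 / p0)^(-(1 - sigma)))^(-1 / (1 - sigma))
  base - curr
}

# shares generated from CES preferences with sigma = 2 (alpha assumed)
p0 <- c(2, 1, 1, 0.5); p1 <- c(1.75, 0.5, 0.95, 0.55)
alpha <- c(0.2, 0.2, 0.2, 0.4)
s0 <- (alpha / p0) / sum(alpha / p0)
s1 <- (alpha / p1) / sum(alpha / p1)
root <- uniroot(cesDiff, interval = c(1.1, 20), tol = 1e-9,
                p0 = p0, p1 = p1, s0 = s0, s1 = s1)$root
root  # recovers sigma close to 2
```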
The dataset available with IndexNumR, CES_sigma_2, was calculated assuming a CES cost function with an elasticity of substitution equal to 2. Running the elasticity function on this dataset,
elasticity(CES_sigma_2,
pvar="prices",
qvar="quantities",
pervar="time",
prodID="prodID",
compIndex="ces")
## $sigma
## [1] 2
##
## $allsigma
## [,1]
## [1,] 2.000000
## [2,] 2.000001
## [3,] 2.000000
## [4,] 1.999999
## [5,] 2.000000
## [6,] 2.000000
## [7,] 2.000000
## [8,] 2.000000
## [9,] 2.000000
## [10,] 2.000000
## [11,] 2.000000
##
## $diff
## [,1]
## [1,] -5.418676e-09
## [2,] -5.665104e-08
## [3,] 3.426148e-13
## [4,] 1.213978e-07
## [5,] 2.196501e-10
## [6,] -1.141232e-11
## [7,] 3.118616e-13
## [8,] 9.429124e-12
## [9,] -7.997090e-09
## [10,] 4.536105e-11
## [11,] 5.087042e-13
which recovers the value of \(\sigma\) used to construct the dataset. There is one additional item of output labelled ‘diff’. This is the value of the difference between the CES index and the comparison index, and is returned so that the user can check that this difference is indeed zero. If it is non-zero then it may indicate that uniroot was not able to find a solution within the specified upper and lower bounds for \(\sigma\). These bounds can be changed with the options upper and lower of the elasticity function; the defaults are 20 and -20 respectively.
One problem with chain-linked indices is the potential for chain drift. Take an example where prices increase in one period and then return to their original level in the next period. An index suffering from chain drift will increase when prices increase, but will not return to its original level when prices do. In the examples above, it was noted that there is substantial positive bias in the Laspeyres index and substantial negative bias in the Paasche index. Part of this is due to chain drift.
One way of reducing the amount of chain drift is to choose linking periods that are ‘similar’ in some sense (alternatively, use a multilateral method). This method of linking has been mentioned by Diewert and Fox (W. Erwin Diewert and Fox 2017), and Hill (Hill 2001) takes the concept further to choose the link period based on a minimum cost spanning tree.
To choose the linking period we need a measure of the similarity between two periods. For each period we have information on prices and quantities. The Hill (2001) method compares the two periods based on the Paasche-Laspeyres spread,
\[\begin{equation} PL (p^{t},p^{T+1},q^{t},q^{T+1}) = \Bigg|{ln\Bigg(\frac{P_{T+1,t}^{L}}{P_{T+1,t}^{P}}\Bigg)}\Bigg|, \end{equation}\]
where \(P^{L}\) is a Laspeyres price index and \(P^{P}\) is a Paasche price index. Since the Laspeyres and Paasche indices are biased in opposite directions, this choice of similarity measure is designed to choose linking periods that minimise the influence of index number method choice.
Alternative measures exist that compute the dissimilarity of two vectors. Two such measures, recommended by Diewert (Walter E. Diewert 2002) are the weighted log-quadratic index of relative price dissimilarity and the weighted asymptotically linear index of relative price dissimilarity, given by the following, \[\begin{align} LQ(p^{t},p^{T+1},q^{t},q^{T+1}) = \sum_{n=1}^{N}\frac{1}{2}&(s_{T+1,n} + s_{t,n})[ln(p_{T+1,n}/P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n})]^{2} \label{eq:logQuadratic} \\ AL(p^{t},p^{T+1},q^{t},q^{T+1}) = \sum_{n=1}^{N}\frac{1}{2}&(s_{T+1,n} + s_{t,n})[(p_{T+1,n}/P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n}) + \nonumber \\ & (P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n}/p_{T+1,n}) - 2] \end{align}\] where \(P(p^{t},p^{T+1},q^{t},q^{T+1})\) is a superlative index number.
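The weighted log-quadratic measure transcribes directly into R; the sketch below uses the Fisher index as the superlative index \(P\), with illustrative price and quantity vectors (this is not the package's relativeDissimilarity implementation).

```r
# Sketch of the weighted log-quadratic relative price dissimilarity,
# using the Fisher index as the superlative deflator P.
fisherIdx <- function(p0, p1, q0, q1)
  sqrt(sum(p1 * q0) / sum(p0 * q0) * sum(p1 * q1) / sum(p0 * q1))

logQuadratic <- function(p0, p1, q0, q1) {
  s0 <- p0 * q0 / sum(p0 * q0)
  s1 <- p1 * q1 / sum(p1 * q1)
  P  <- fisherIdx(p0, p1, q0, q1)
  sum(0.5 * (s0 + s1) * log(p1 / (P * p0))^2)
}

p0 <- c(2, 1, 1, 0.5); p1 <- c(1.75, 0.5, 0.95, 0.55)
q0 <- c(0.38, 1.54, 1.54, 12.31); q1 <- c(0.58, 7.16, 1.76, 11.42)
logQuadratic(p0, p1, q0, q1)  # zero only if all prices moved proportionally
```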
Another measure proposed by Fox, Hill and Diewert (Fox, Hill, and Diewert 2004) is a measure of absolute dissimilarity given by,
\[\begin{equation}
AD(x_{j},x_{k}) =
\frac{1}{M+N}\sum_{l=1}^{M+N}\Bigg[ln\Bigg(\frac{x_{kl}}{x_{jl}}\Bigg) -
\frac{1}{M+N}\sum_{i=1}^{M+N}ln\Bigg(\frac{x_{ki}}{x_{ji}}\Bigg)\Bigg]^{2}
+
\Bigg[\frac{1}{M+N}\sum_{i=1}^{M+N}ln\Bigg(\frac{x_{ki}}{x_{ji}}\Bigg)\Bigg]^{2},
\end{equation}\]
where \(M+N\) is the total number of items in the vector and \(x_{j}\) and \(x_{k}\) are the two vectors being compared. The authors use this in the context of detecting outliers, but it can be used to compare the price and quantity vectors of two time periods. One way to do this is to use only price information, or only quantity information. There are two ways to use both price and quantity information: stack the price and quantity vectors for each time period into a single vector and compare the two ‘stacked’ vectors; or calculate separate measures of absolute dissimilarity for prices and quantities before combining these into a single measure. The former method is simple to implement, but augments the price vector with a quantity vector that may be of considerably different magnitude and variance. The latter method avoids this by computing the absolute dissimilarity for prices and quantities separately, then combining them by taking the geometric average.
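The absolute dissimilarity formula is compact to transcribe: it is the (population) variance of the log ratios plus the squared mean log ratio. This is a sketch of the formula only, not the package's mixScaleDissimilarity function.

```r
# Sketch of the Fox-Hill-Diewert absolute dissimilarity between two vectors.
absDissim <- function(xj, xk) {
  r <- log(xk / xj)
  mean((r - mean(r))^2) + mean(r)^2
}

x <- c(1, 2, 3, 4)
absDissim(x, x)      # identical vectors: 0
absDissim(x, 2 * x)  # a pure scale difference: log(2)^2
```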
The final measure is the predicted share measure of relative price dissimilarity employed by Diewert, Finkel, Sayag and White in the Seasonal Products chapter of the Update of the Consumer Price Index Manual, Consumer Price Index Theory (draft available here). To introduce this measure, first we define some notation.
The share of expenditure on product \(n\) in period \(t\) is given by \(s_{t,n} = p_{t,n}q_{t,n}/ \sum_{i=1}^{K}p_{t,i}q_{t,i}\). The ‘predicted’ share of expenditure on product \(n\) in period \(t\), using the quantities of period \(t\) and the prices of period \(r\) is given by \(s_{r,t,n} = p_{r,n}q_{t,n}/ \sum_{i=1}^{K}p_{r,i}q_{t,i}\). We also define the predicted share error \(e_{r,t,n}\) as the actual share, minus the predicted share \(s_{t,n} - s_{r,t,n}\).
The predicted share measure of relative price dissimilarity between the periods \(t\) and \(r\) is given by: \[\begin{equation} PS_{r,t} = \sum_{n=1}^{N} (e_{r,t,n})^2 + \sum_{n=1}^{N} (e_{t,r,n})^2 \end{equation}\]
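The predicted share measure can be sketched from the definitions above. The price and quantity vectors are illustrative; this is not the package's implementation.

```r
# Sketch of the predicted share dissimilarity PS_{r,t}: squared errors
# between actual shares and shares predicted with the other period's prices.
predShareDissim <- function(pr, pt, qr, qt) {
  s_t  <- pt * qt / sum(pt * qt)   # actual shares in period t
  s_r  <- pr * qr / sum(pr * qr)   # actual shares in period r
  s_rt <- pr * qt / sum(pr * qt)   # period t quantities at period r prices
  s_tr <- pt * qr / sum(pt * qr)   # period r quantities at period t prices
  sum((s_t - s_rt)^2) + sum((s_r - s_tr)^2)
}

pr <- c(2, 1, 1, 0.5); pt <- c(1.75, 0.5, 0.95, 0.55)
qr <- c(0.38, 1.54, 1.54, 12.31); qt <- c(0.58, 7.16, 1.76, 11.42)
predShareDissim(pr, pt, qr, qt)  # zero when relative prices are unchanged
```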
When the dataset being used does not have quantities and an elementary index is being constructed, we cannot compute the shares in the above formulas. In this case, quantities are imputed in such a way that the expenditure shares on each product available in a time period are equal. The quantities are constructed by setting quantity equal to \(1/P_{n,t}\times N_{t}\), where \(N_{t}\) is the number of products available in period \(t\). IndexNumR does this with the function imputeQuantities; however, price indexes can be estimated without calling this function directly. Calling priceIndex and setting qvar = "" will trigger IndexNumR to impute the quantities used in the estimation of the predicted share relative price dissimilarity measure.
IndexNumR provides three functions enabling the estimation of the dissimilarity measures above. The first function, relativeDissimilarity, calculates the Paasche-Laspeyres spread, log-quadratic, asymptotically linear and predicted share relative dissimilarity measures, and the second function, mixScaleDissimilarity, computes the mix, scale and absolute measures of dissimilarity. Both functions provide the same form of output: a data frame with three columns containing the indices of the pairs of periods being compared in the first two columns and the value of the dissimilarity measure in the third column.
Once these have been computed, the function maximumSimilarityLinks can take the output data frame from these functions and compute the maximum similarity linking periods. Alternatively, the function priceIndex can compute a similarity-linked index directly via its chainMethod option.
Using the log-quadratic measure of relative dissimilarity, the
dissimilarity between the periods in the CES_sigma_2
dataset is as follows,
lq <- relativeDissimilarity(CES_sigma_2,
pvar="prices",
qvar="quantities",
pervar = "time",
prodID = "prodID",
indexMethod = "fisher",
similarityMethod = "logquadratic")
head(lq)
## period_i period_j dissimilarity
## 1 1 2 0.09726451
## 2 1 3 0.02037395
## 3 1 4 0.04164311
## 4 1 5 0.28078294
## 5 1 6 0.08880177
## 6 1 7 0.08531212
The output from estimating the dissimilarity between periods can then be used to estimate the maximum similarity links,
## xt x0 dissimilarity
## 1 1 1 0.000000000
## 2 2 1 0.097264508
## 3 3 1 0.020373951
## 4 4 3 0.003832972
## 5 5 4 0.130990853
## 6 6 4 0.008684012
## 7 7 6 0.001122913
## 8 8 2 0.041022738
## 9 9 7 0.001367896
## 10 10 5 0.006962106
## 11 11 9 0.002946674
## 12 12 11 0.003612044
To estimate a chained Laspeyres index linking together the periods with maximum similarity as estimated above,
priceIndex(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
indexMethod = "laspeyres",
output = "chained",
chainMethod = "logquadratic")
## [,1]
## [1,] 1.0000000
## [2,] 0.9673077
## [3,] 1.1000000
## [4,] 1.1406143
## [5,] 1.0639405
## [6,] 1.2190887
## [7,] 1.1617463
## [8,] 1.0551558
## [9,] 1.1357327
## [10,] 1.0928877
## [11,] 1.1732711
## [12,] 1.1835084
Multilateral index number methods use data from multiple periods to compute each term in the index. IndexNumR provides the functions GEKSIndex, GKIndex and WTPDIndex to use the GEKS, Geary-Khamis or Weighted-Time-Product-Dummy multilateral index number methods respectively.
The GEKS method is attributable to Gini (Gini 1931), Eltetö and Köves (Eltetö and Köves 1964), and Szulc (Szulc 1964) in the cross-sectional context. The idea of adapting the method to the time series context is due to Balk (Balk 1981), and was developed further by Ivancic, Diewert and Fox (Ivancic, Diewert, and Fox 2011).
The user must choose the size of the window over which to apply the GEKS method, typically one or two years of data plus one period to account for seasonality. Denote this as \(w\). The basic method followed by the function GEKSIndex is as follows. Choose a period, denoted period \(k\), within the window as the base period. Calculate a bilateral index number between period \(k\) and every other period in the window. Repeat this for all possible choices of \(k\). This gives a matrix of size \(w\times w\) of bilateral indexes between all possible pairs of periods within the window. Then compute the GEKS indexes for the first \(w\) periods as,
\[\begin{equation}
\left[ \prod_{k=1}^{w}P^{k,1} \right]^{1/w}, \left[
\prod_{k=1}^{w}P^{k,2} \right]^{1/w}, \cdots, \left[
\prod_{k=1}^{w}P^{k,w} \right]^{1/w},
\end{equation}\] where the term \(P^{k,t}\) is the bilateral index between period \(t\) and base period \(k\). IndexNumR offers the Fisher, Tornqvist, Walsh, Jevons and time-product-dummy index number methods for the index \(P\) via the indexMethod option. The Tornqvist index method is the default. The \(w\times w\) matrix of bilateral indexes is as follows, \[P =
\begin{pmatrix}
P^{1,1} & \cdots & P^{1,w} \\
\vdots & \ddots & \vdots \\
P^{w,1} & \cdots & P^{w,w}
\end{pmatrix}
\] The first term of the GEKS index is thus the geometric mean of the elements in the first column of the above matrix, the second term is the geometric mean of the second column, and so on. Note that IndexNumR makes use of two facts about this matrix to speed up computation: it is (inversely) symmetric, so that \(P^{j,k} = 1/P^{k,j}\), and its diagonal elements are 1.
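As a concrete illustration, the column geometric means can be sketched in a few lines of Python (a toy sketch, not IndexNumR's R implementation; the bilateral matrix here is built from hypothetical price levels, so it is exactly transitive):

```python
import math

def geks(P):
    """GEKS index from a w x w matrix of bilateral indexes, where P[k][t]
    compares period t (0-based) with base period k; normalised to start at 1."""
    w = len(P)
    idx = [math.prod(P[k][t] for k in range(w)) ** (1 / w) for t in range(w)]
    return [v / idx[0] for v in idx]

# toy bilateral matrix built from hypothetical price levels; note the
# diagonal is 1 and P[j][k] == 1 / P[k][j], the two facts exploited above
levels = [1.0, 1.1, 0.95, 1.2]
P = [[levels[t] / levels[k] for t in range(len(levels))]
     for k in range(len(levels))]
print(geks(P))  # recovers the levels relative to period 1, up to float error
```

Because the toy matrix is transitive, the GEKS index simply recovers the assumed price levels; with real bilateral indexes the matrix is not transitive, and the geometric mean resolves the inconsistencies.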
The intersection GEKS (int-GEKS) was developed by Claude Lamboray and Frances Krsinich (C. Lamboray and Krsinich 2015) to deal with the asymmetry with which products enter the index, when there are appearing or disappearing products. The issue arises because when calculating the GEKS index comparing two adjacent periods, products that are not matched for the two periods may still contribute to the index, either in period \(t-1\) or period \(t\), but not both.
To see this, note that the GEKS index between periods \(t-1\) and \(t\), using the window going from period 1 to \(w\), can be written as: \[\begin{equation} P^{t-1,t}_{[1:w]} = \prod_{k=1}^{w}(P_{t-1,k}\times P_{k,t})^{1/w}, \end{equation}\]
where \(P_{t-1,k}\) is the bilateral price index between periods \(t-1\) and \(k\), and \(P_{k,t}\) is similarly defined. The usual GEKS procedure performed by IndexNumR when using the function GEKSIndex and specifying sample = "matched" performs matching between period \(t\) and period \(k\) only. The int-GEKS method performs matching between periods \(t-1\), \(t\) and \(k\). Since more matching is performed, fewer data points are used in estimating the index, particularly if product turnover is high. It is also computationally somewhat slower, as more matching must be performed.
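The product form of the adjacent-period GEKS movement can be checked numerically. The Python sketch below (toy data, not package code) builds a transitive bilateral matrix from assumed price levels and verifies that the ratio of column geometric means equals \(\prod_{k=1}^{w}(P_{t-1,k}\times P_{k,t})^{1/w}\):

```python
import math

# toy bilateral matrix P[k][t] built from assumed price levels, where
# P[k][t] compares period t with base period k (0-based indices)
levels = [1.0, 1.05, 0.98, 1.10, 1.02]
w = len(levels)
P = [[levels[t] / levels[k] for t in range(w)] for k in range(w)]

def geks_term(t):
    # geometric mean of column t of the bilateral matrix
    return math.prod(P[k][t] for k in range(w)) ** (1 / w)

t = 3  # compare period t with period t - 1
movement = geks_term(t) / geks_term(t - 1)
product_form = math.prod(P[t - 1][k] * P[k][t] for k in range(w)) ** (1 / w)
# both expressions give the same adjacent-period GEKS movement
```

The equivalence holds because \(P^{k,t-1} = 1/P_{t-1,k}\), which is why products present in only one of the two adjacent periods can still contribute through the \(P_{t-1,k}\) and \(P_{k,t}\) terms.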
The Geary-Khamis, or GK method, was introduced by Geary (Geary 1958) and extended by Khamis (Khamis 1970, 1972). This method involves calculating a set of quality adjustment factors, \(b_{n}\), simultaneously with the price levels, \(P_{t}\). The two equations that determine both of these are: \[\begin{equation} b_{n} = \sum_{t=1}^{T}\left[\frac{q_{tn}}{q_{n}}\right]\left[\frac{p_{tn}}{P_{t}}\right] \end{equation}\]
\[\begin{equation} P_{t} = \frac{p^{t} \cdot q^{t}} {b \cdot q^{t}} \end{equation}\]
These equations can be solved by an iterative method: a set of \(b_{n}\) is arbitrarily chosen and used to calculate an initial vector of price levels; this vector of price levels is then used to generate a new \(b\) vector, and so on, until the changes become smaller than some threshold. IndexNumR uses the iterative method when the parameter solveMethod = "iterative" is specified. However, there is an alternative method using matrix algebra that is significantly more efficient. To use the more efficient method discussed below, specify solveMethod = "inverse".
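The iterative approach can be sketched as follows (a Python toy with assumed prices and quantities; IndexNumR's R implementation differs in detail):

```python
# toy data (assumed values): prices[t][n] and quantities[t][n]
# for T = 3 periods and N = 2 products
prices = [[1.0, 2.0], [1.2, 2.2], [1.1, 2.5]]
quantities = [[10.0, 5.0], [9.0, 6.0], [11.0, 4.0]]
T, N = len(prices), len(prices[0])
# q_n: total quantity of each product over all periods
qtot = [sum(quantities[t][n] for t in range(T)) for n in range(N)]

b = [1.0] * N  # arbitrary starting values for the adjustment factors
for _ in range(1000):
    # price levels implied by the current b vector: P_t = (p.q) / (b.q)
    P = [sum(p * q for p, q in zip(prices[t], quantities[t])) /
         sum(bn * q for bn, q in zip(b, quantities[t])) for t in range(T)]
    # adjustment factors implied by the price levels
    b_new = [sum((quantities[t][n] / qtot[n]) * (prices[t][n] / P[t])
                 for t in range(T)) for n in range(N)]
    converged = max(abs(x - y) for x, y in zip(b, b_new)) < 1e-12
    b = b_new
    if converged:
        break

index = [p / P[0] for p in P]  # GK price index normalised to period 1
```

At convergence the \(b\) and \(P\) vectors jointly satisfy both GK equations, which is exactly the fixed point the iteration seeks.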
As discussed in (W. Erwin Diewert and Fox 2017) and following Diewert (Walter E. Diewert 1999), the problem of finding the \(b\) vector can be solved using the following system of equations:
\[\begin{equation} \left[I_{N} - C\right]b = 0_{N}, \end{equation}\] where \(I_{N}\) is the \(N \times N\) identity matrix, \(0_{N}\) is an \(N\)-dimensional vector of zeros and the \(C\) matrix is given by,
\[\begin{equation} C = \hat{q}^{-1} \sum_{t=1}^{T}s^{t}q^{t\textbf{T}}, \end{equation}\] where \(\hat{q}^{-1}\) is the inverse of the \(N \times N\) diagonal matrix \(\hat{q}\), whose diagonal elements are the total quantities purchased of each good over all time periods, \(s^{t}\) is the vector of expenditure shares for time period \(t\), and \(q^{t\textbf{T}}\) is the transpose of the vector of quantities purchased in time period \(t\). It can be shown that the matrix \([I-C]\) is singular, so a normalisation is required to solve for \(b\). IndexNumR follows the method discussed by Irwin Collier Jr. in his comment on (Walter E. Diewert 1999) and assumes the following normalisation,
\[\begin{equation} \sum_{n=1}^{N}b_{n}q_{n} = 1, \end{equation}\] which is, in matrix form, \[\begin{equation} c = R\begin{bmatrix} b_{1}q_{1} \\ \vdots \\ b_{n}q_{n} \end{bmatrix}, \end{equation}\] where \(c\) is the \(N \times 1\) vector \(\begin{bmatrix} 1 & 0 & \dots & 0 \end{bmatrix}^{\textbf{T}}\),
and \(R\) is the \(N \times N\) matrix, \[\begin{equation} R = \begin{bmatrix} 1 & 1 & \dots & 1 \\ 0 & \dots & \dots & 0 \\ \vdots & & & \vdots \\ 0 & \dots & \dots & 0 \end{bmatrix} \end{equation}\]
Adding the constraint to the original equation we now have the solution for \(b\), \[\begin{equation} b = [I_{N} - C + R]^{-1}c. \end{equation}\]
Once the \(b\) vector has been calculated, the price levels can be computed from the GK equations above.
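The matrix method can be sketched in Python for a tiny assumed dataset; for brevity the \(N = 2\) system is solved with the closed-form 2x2 inverse rather than a general linear solver. Since \(b\) is identified only up to scale, the resulting price-level ratios do not depend on the particular normalisation used:

```python
# toy data (assumed values): T = 3 periods, N = 2 products
prices = [[1.0, 2.0], [1.2, 2.2], [1.1, 2.5]]
quantities = [[10.0, 5.0], [9.0, 6.0], [11.0, 4.0]]
T, N = len(prices), len(prices[0])
qtot = [sum(quantities[t][n] for t in range(T)) for n in range(N)]

def share(t, n):
    # expenditure share of product n in period t
    expenditure = sum(p * q for p, q in zip(prices[t], quantities[t]))
    return prices[t][n] * quantities[t][n] / expenditure

# C = qhat^{-1} sum_t s^t q^{tT}; element (n, j) is sum_t s_tn * q_tj / qtot_n
C = [[sum(share(t, n) * quantities[t][j] for t in range(T)) / qtot[n]
      for j in range(N)] for n in range(N)]

# A = I - C + R, with R having ones in its first row and zeros elsewhere
A = [[1.0 - C[0][0] + 1.0, -C[0][1] + 1.0],
     [-C[1][0], 1.0 - C[1][1]]]
# solve A b = c with c = (1, 0)^T, using the 2x2 closed-form inverse
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
b = [A[1][1] / det, -A[1][0] / det]

# price levels from the GK equation P_t = (p.q) / (b.q)
P = [sum(p * q for p, q in zip(prices[t], quantities[t])) /
     sum(bn * q for bn, q in zip(b, quantities[t])) for t in range(T)]
```

The recovered \(b\) vector satisfies \([I_{N}-C]b = 0_{N}\), so substituting it back into the GK price-level equation gives a consistent set of price levels in one pass, with no iteration.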
The weighted time-product-dummy method can be seen as the country-product-dummy method (Summers 1973) adapted to the time-series context, and supposes the following model for prices:
\[\begin{equation} p_{tn} = \alpha_{t}b_{n}e_{tn}, \end{equation}\] where \(\alpha_{t}\) can be interpreted as the price level in period \(t\), \(b_{n}\) is the quality adjustment factor for product \(n\) and \(e_{tn}\) is a stochastic error term.
The problem is to solve for \(\alpha\) and \(b\) using least squares minimisation. Following (Rao 1995), it is formulated as a weighted least squares minimisation, where the weights are based on economic importance. Diewert and Fox show that this can be written as the solution to the system of equations,
\[\begin{equation} [I_{N} - F]\beta = f, \end{equation}\] where \(I_{N}\) is the \(N \times N\) identity matrix,
\(F\) is the following \(N \times N\) matrix, \[\begin{equation} F = \begin{bmatrix} f_{11} & \dots & f_{1N} \\ \vdots & \dots & \vdots \\ f_{N1} & \dots & f_{NN} \end{bmatrix}, \end{equation}\]
the elements of \(F\) are the following, \[\begin{equation} f_{nj} = w_{nj}/\sum_{k=1}^{N}w_{nk} \quad n,j = 1, \dots, N, \end{equation}\]
with the \(w_{nj}\) given by, \[\begin{equation} w_{nj} = \sum_{t=1}^{T}w_{tnj} \quad n,j = 1, \dots, N, \end{equation}\]
and the \(w_{tnj}\) given by, \[\begin{equation} w_{tnj} = s_{tn}s_{tj} \quad n \neq j, n = 1, \dots, N; j = 1, \dots, N; t = 1, \dots, T. \end{equation}\]
\(f\) on the right-hand-side is the following, \[\begin{equation} f = [f_{1}, \dots, f_{N}]^{\textbf{T}}, \end{equation}\]
where the \(f_{n}\) are given by, \[\begin{equation} f_{n} = \sum_{t=1}^{T}\sum_{j=1}^{N}f_{tnj}(y_{tn} - y_{tj}) \quad \text{for } n = 1, \dots, N, \end{equation}\]
and \(y_{tn} = log(p_{tn})\).
The matrix \([I_{N} - F]\) is singular so a normalisation must be used to solve the system of equations. IndexNumR uses the method discussed in (W. Erwin Diewert and Fox 2017); \(\beta_{N}\) is assumed to be zero and the last equation is dropped to solve for the remaining coefficients.
The multilateral indexes are normalised by dividing by the first term, to give an index for the first \(w\) periods that starts at 1. If the index only covers \(w\) periods then no further calculation is required. However, if there are \(T>w\) periods in the dataset then the index must be extended.
Extending a multilateral index can be done in many ways. Statistical agencies generally do not revise price indices like the consumer price index, so the methods offered by IndexNumR to extend multilateral indexes are methods that do not lead to revisions. These are known as splicing methods, and the options available are movement, window, half, mean, fbew (fixed base expanding window), fbmw (fixed base moving window), wisp (window splice on published data), hasp (half splice on published data) and mean_pub (mean splice on published data). The idea behind most of these methods is to move the window forward by one period and calculate the index for the new window. There will be \(w-1\) periods that overlap between the initial index and the index computed on the window that has been rolled forward one period, and any one of these overlapping periods can be used to extend the multilateral index. The window, half and mean splice variants on published data use the same calculations as their classical counterparts, but splice onto the published series instead of onto the previously calculated window.
Let \(P_{OLD}\) be the index computed over periods \(1\) to \(w\) and let \(P_{NEW}\) be the index computed over the window rolled forward one period, from periods \(2\) to \(w+1\). Let the final index simply be \(P\). For the first \(w\) periods \(P = P_{OLD}\), then \(P^{w+1}\) is computed using the splicing methods as follows.
Movement splice (Ivancic, Diewert, and Fox 2011) \[\begin{equation} P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}}{P_{NEW}^{w}} \end{equation}\] That is, the movement between the final two periods of the index computed over the new window is used to extend the original index from period \(w\) to \(w+1\).
Window splice (Krsinich 2016) \[\begin{equation} P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}/P_{NEW}^{2}}{P_{OLD}^{w}/P_{OLD}^{2}} \end{equation}\] In this case, the ratio of the movement between the first and last periods computed using the new window, to the movement between the first and last periods using the old window is used to extend the original index.
Half splice \[\begin{equation} P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}/P_{NEW}^{\frac{w-1}{2}+1}}{P_{OLD}^{w}/P_{OLD}^{\frac{w-1}{2}+1}} \end{equation}\] The half splice uses the period in the middle of the window as the overlapping period to calculate the splice.
Mean splice (Ivancic, Diewert, and Fox 2011) \[\begin{equation} P^{w+1} = P^{w} \times \left( \prod_{t=1}^{w-1} \frac{P_{NEW}^{w+1}/P_{NEW}^{t+1}}{P_{OLD}^{w}/P_{OLD}^{t+1}} \right)^{\frac{1}{(w-1)}} \end{equation}\] The mean splice uses the geometric mean of the movements between the last period and every other period in the window to extend the original index.
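The movement, window and mean splices can be sketched directly from their formulas. The Python snippet below uses two assumed index series over a window of \(w = 5\) periods; it is an illustration of the formulas above, not IndexNumR's R code:

```python
import math

w = 5
# published index over periods 1..w, and the index re-estimated on the
# window rolled forward one period, covering periods 2..w+1 (assumed values)
P_old = [1.00, 0.97, 1.05, 1.08, 1.02]   # P_old[i] is period i + 1
P_new = [1.00, 1.06, 1.10, 1.04, 1.07]   # P_new[i] is period i + 2

def movement_splice():
    # growth over the last two periods of the new window
    return P_old[-1] * P_new[-1] / P_new[-2]

def window_splice():
    # ratio of full-window growth in the new window to that in the old window
    return P_old[-1] * (P_new[-1] / P_new[0]) / (P_old[-1] / P_old[1])

def mean_splice():
    # geometric mean over every possible overlap period t + 1, t = 1..w-1
    terms = [(P_new[-1] / P_new[t - 1]) / (P_old[-1] / P_old[t])
             for t in range(1, w)]
    return P_old[-1] * math.prod(terms) ** (1 / (w - 1))
```

Note that the movement splice is the \(t = w-1\) term of the mean splice and the window splice is its \(t = 1\) term, so the mean splice always lies between the most extreme single-period splices.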
FBMW (Claude Lamboray 2017) \[\begin{equation} P^{w+1} = P^{base} \times \frac{P_{NEW}^{w+1}}{P_{NEW}^{base}} \end{equation}\] This method uses a fixed base period that is updated periodically. For example, if the data are monthly then the base period could be each December, which would be achieved by ensuring that December is the first period in the data and specifying a window length of 13. The splice is calculated using the movement between the final data point and the base period in the new window to extend the index. If the new period being calculated is the first period after the base period (January in this example), this method produces the same price growth as the movement splice.
FBEW (Chessa 2016)
This method uses the same calculation as FBMW but on a different set of data: the window used to compute the new data point expands each period to include the latest period of data. If the data are monthly and the base period is each December, then the window used to compute the new data point in January includes only the December and January months; in February it includes the December, January and February months, and so on until the next December, where it includes the full 13 months (assuming a window length of 13). When the new period being calculated is the base period (each December in this example), this method produces the same result as the FBMW method.
The splicing methods are used in this fashion to extend the series up to the final period in the data.
# Assume that the data in CES_sigma_2 are quarterly data with time period
# 1 corresponding to the December quarter.
splices <- c("window", "half", "movement", "mean", "fbew", "fbmw", "wisp", "hasp", "mean_pub")
# estimate a GEKS index using the different splicing methods. Under
# the above assumptions, the window must be 5 to ensure the base period is
# each December quarter.
result <- as.data.frame(lapply(splices, function(x){
GEKSIndex(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
indexMethod = "tornqvist",
window = 5,
splice = x)
}))
colnames(result) <- splices
result
## window half movement mean fbew fbmw wisp
## 1 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
## 2 0.8927770 0.8927770 0.8927770 0.8927770 0.8927770 0.8927770 0.8927770
## 3 1.0781723 1.0781723 1.0781723 1.0781723 1.0781723 1.0781723 1.0781723
## 4 1.1132724 1.1132724 1.1132724 1.1132724 1.1132724 1.1132724 1.1132724
## 5 0.9292537 0.9292537 0.9292537 0.9292537 0.9292537 0.9292537 0.9292537
## 6 1.1784816 1.1789102 1.1772392 1.1783996 1.1746060 1.1772392 1.1784816
## 7 1.1221118 1.1207012 1.1205204 1.1214679 1.1179392 1.1191128 1.1225753
## 8 0.9383833 0.9371778 0.9368880 0.9376351 0.9348518 0.9352286 0.9386945
## 9 1.0951022 1.0942446 1.0941491 1.0947038 1.0914207 1.0914207 1.0914207
## 10 0.9515914 0.9510825 0.9507333 0.9512963 0.9486380 0.9483625 0.9517028
## 11 1.1280620 1.1274679 1.1268628 1.1277415 1.1241728 1.1242435 1.1288942
## 12 1.1336566 1.1327009 1.1323917 1.1332152 1.1297415 1.1297611 1.1354276
## hasp mean_pub
## 1 1.0000000 1.0000000
## 2 0.8927770 0.8927770
## 3 1.0781723 1.0781723
## 4 1.1132724 1.1132724
## 5 0.9292537 0.9292537
## 6 1.1789102 1.1783996
## 7 1.1191128 1.1214484
## 8 0.9383566 0.9373834
## 9 1.0925320 1.0940276
## 10 0.9524902 0.9512759
## 11 1.1253883 1.1276104
## 12 1.1341850 1.1330457
Under the assumptions in the above example, periods 1, 5 and 9 are December quarters. Periods 1-5 are computed using full information and periods 6-12 are computed using the splicing methods. Notice that fbew = fbmw in period 9 (a December quarter) and fbmw = movement in period 6 (the first period after the base period).
The above index number methods are derived from a ratio approach, which decomposes the value change from one period to the next into the product of a price index and a quantity index. An alternative approach decomposes value change into the sum of a price indicator and a quantity indicator. The theory dates back to the 1920s, and an excellent paper on this approach has been written by Diewert (Walter E. Diewert 2005a). There are a number of methods available for computing the indicators, and IndexNumR exposes the following via the priceIndicator function:
Laspeyres indicator \[\begin{equation} I(p^{t-1}, p^{t}) = \sum_{n=1}^{N}q_{n}^{t-1}\times(p_{n}^{t}-p_{n}^{t-1}) \end{equation}\]
Paasche indicator \[\begin{equation} I(p^{t-1}, p^{t}) = \sum_{n=1}^{N}q_{n}^{t}\times(p_{n}^{t}-p_{n}^{t-1}) \end{equation}\]
Bennet indicator (Bennet 1920) \[\begin{equation} I(p^{t-1}, p^{t}) = \sum_{n=1}^{N} \frac{(q_{n}^{t}+q_{n}^{t-1})}{2} \times(p_{n}^{t}-p_{n}^{t-1}) \end{equation}\]
Montgomery indicator (Montgomery 1929) \[\begin{equation} I(p^{t-1}, p^{t}) = \sum_{n=1}^{N} \frac{p_{n}^{t}q_{n}^{t} - p_{n}^{t-1}q_{n}^{t-1}}{log(p_{n}^{t}q_{n}^{t}) - log(p_{n}^{t-1}q_{n}^{t-1})} \times log\left(\frac{p_{n}^{t}}{p_{n}^{t-1}}\right) \end{equation}\]
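The four indicators can be sketched numerically. The Python snippet below (toy data, assumed values, not the package's R code) implements each one; the Montgomery indicator uses the logarithmic mean of the period values multiplied by the log price relative:

```python
import math

# two periods of toy prices and quantities for N = 3 products (assumed values)
p0, p1 = [1.0, 2.0, 1.5], [1.1, 1.9, 1.6]
q0, q1 = [10.0, 5.0, 8.0], [9.0, 6.0, 8.0]

def laspeyres(p0, p1, q0, q1):
    # base-period quantities weight the price changes
    return sum(q * (pa - pb) for q, pa, pb in zip(q0, p1, p0))

def paasche(p0, p1, q0, q1):
    # current-period quantities weight the price changes
    return sum(q * (pa - pb) for q, pa, pb in zip(q1, p1, p0))

def bennet(p0, p1, q0, q1):
    # average of base and current quantities weights the price changes
    return sum(0.5 * (qa + qb) * (pa - pb)
               for qa, qb, pa, pb in zip(q1, q0, p1, p0))

def montgomery(p0, p1, q0, q1):
    # logarithmic mean of the period values times the log price relative
    def logmean(a, b):
        return a if a == b else (a - b) / (math.log(a) - math.log(b))
    return sum(logmean(pa * qa, pb * qb) * math.log(pa / pb)
               for pa, qa, pb, qb in zip(p1, q1, p0, q0))
```

A useful check is that the Bennet indicator is exactly the arithmetic mean of the Laspeyres and Paasche indicators, which follows directly from its definition.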
Price indicators for the CES_sigma_2 dataset are as follows:
methods <- c("laspeyres", "paasche", "bennet", "montgomery")
p <- lapply(methods, function(x) {priceIndicator(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
method = x)})
as.data.frame(p, col.names = methods)
## laspeyres paasche bennet montgomery
## 1 NA NA NA NA
## 2 -0.3269231 -3.23451167 -1.78071737 -1.27874802
## 3 4.3441768 1.19889566 2.77153621 2.23764163
## 4 0.4061429 0.33835480 0.37224887 0.37329461
## 5 -0.8066580 -5.65501233 -3.23083515 -2.35138599
## 6 5.8451382 1.89061744 3.86787782 3.23912451
## 7 -0.6114830 -0.72404798 -0.66776546 -0.66571059
## 8 -1.6992746 -4.76683536 -3.23305498 -2.74535253
## 9 4.5203554 1.38856453 2.95445995 2.45168559
## 10 -1.0274652 -4.49985294 -2.76365909 -2.28761791
## 11 4.9221471 1.65051935 3.28633320 2.85483403
## 12 0.1396069 0.02503502 0.08232098 0.08391295
Quantity indicators can also be produced, using the same methods outlined above, via the quantityIndicator function. This allows the value change from one period to the next to be decomposed into price and quantity movements. To facilitate this, IndexNumR contains the valueDecomposition function, which can be used as follows to produce a decomposition of the value change for CES_sigma_2 using a Bennet indicator:
valueDecomposition(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
priceMethod = "bennet")
## price quantity changes values
## 1 NA NA NA NA
## 2 -1.78071737 4.7807174 3 13
## 3 2.77153621 -4.7715362 -2 11
## 4 0.37224887 0.6277511 1 12
## 5 -3.23083515 6.2308351 3 15
## 6 3.86787782 -5.8678778 -2 13
## 7 -0.66776546 1.6677655 1 14
## 8 -3.23305498 6.2330550 3 17
## 9 2.95445995 -4.9544600 -2 15
## 10 -2.76365909 5.7636591 3 18
## 11 3.28633320 -5.2863332 -2 16
## 12 0.08232098 0.9176790 1 17
Note that for this decomposition, the method is specified for the price indicator and IndexNumR uses the appropriate quantity indicator. For the Bennet and Montgomery indicators, the same method is used for the quantity indicator as for the price indicator. If a Laspeyres price indicator is requested then the corresponding quantity indicator is a Paasche indicator, and the reverse is true if the Paasche indicator is used for prices.
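The Bennet decomposition is exact: the price and quantity indicators sum precisely to the value change. This can be verified in a few lines of Python with assumed toy data (an illustration of the identity, not the package code):

```python
# toy data for one period-to-period comparison (assumed values)
p0, p1 = [1.0, 2.0, 1.5], [1.1, 1.9, 1.6]
q0, q1 = [10.0, 5.0, 8.0], [9.0, 6.0, 8.0]

# Bennet price indicator: average quantities weight the price changes
price_ind = sum(0.5 * (qa + qb) * (pa - pb)
                for qa, qb, pa, pb in zip(q1, q0, p1, p0))
# Bennet quantity indicator: average prices weight the quantity changes
quantity_ind = sum(0.5 * (pa + pb) * (qa - qb)
                   for pa, pb, qa, qb in zip(p1, p0, q1, q0))
value_change = sum(pa * qa - pb * qb
                   for pa, qa, pb, qb in zip(p1, q1, p0, q0))
# the two Bennet indicators decompose the value change exactly
assert abs(price_ind + quantity_ind - value_change) < 1e-12
```

The identity holds term by term, since \(\tfrac{1}{2}(q^{t}+q^{t-1})(p^{t}-p^{t-1}) + \tfrac{1}{2}(p^{t}+p^{t-1})(q^{t}-q^{t-1}) = p^{t}q^{t} - p^{t-1}q^{t-1}\), which is why the changes column in the output above equals the sum of the price and quantity columns.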
If a variable is available in the dataset that identifies the group to which each product belongs, it is possible to estimate indexes on each of the groups in the sample using the function groupIndexes. Examples are products that come from different geographic regions, or that belong to different product categories. groupIndexes will split the data into the different groups and estimate a price index on each group. Any of the price index functions can be used by specifying the indexFunction parameter and supplying the arguments to the price index function as a named list.
# add a group variable to the CES_sigma_2 dataset
# products 1 and 2 will be in group 1, products 3 and 4 in group 2
df <- CES_sigma_2
df$group <- c(rep(1, 24), rep(2, 24))
# put the arguments to the priceIndex function into a named list
argsList <- list(x = df, pvar = "prices", qvar = "quantities", pervar = "time", prodID = "prodID",
indexMethod = "fisher", output = "chained")
# estimate a bilateral chained fisher index on the groups
groupIndexes("group", "priceIndex", argsList)
## [[1]]
## prices time group
## 1 1.0000000 1 1
## 2 0.5877029 2 1
## 3 0.9380405 3 1
## 4 0.9389789 4 1
## 5 0.9349687 5 1
## 6 0.9341755 6 1
## 7 0.9316160 7 1
## 8 0.6112508 8 1
## 9 0.9039828 9 1
## 10 0.9039828 10 1
## 11 0.8944109 11 1
## 12 0.8797538 12 1
##
## [[2]]
## prices time group
## 1 1.0000000 1 2
## 2 1.0661653 2 2
## 3 1.1246802 3 2
## 4 1.1750587 4 2
## 5 0.9188633 5 2
## 6 1.2641295 6 2
## 7 1.1815301 7 2
## 8 1.1086856 8 2
## 9 1.1552277 9 2
## 10 0.9533554 10 2
## 11 1.2068426 11 2
## 12 1.2237077 12 2
# put the arguments for the GEKSIndex function in a named list
argsGEKS <- list(x = df, pvar = "prices", qvar = "quantities", pervar = "time", prodID = "prodID",
indexMethod = "fisher", window = 12)
# estimate a GEKS index on the groups
groupIndexes("group", "GEKSIndex", argsGEKS)
## [[1]]
## prices time group
## 1 1.0000000 1 1
## 2 0.6008332 2 1
## 3 0.9469454 3 1
## 4 0.9468472 4 1
## 5 0.9423350 5 1
## 6 0.9409859 6 1
## 7 0.9378510 7 1
## 8 0.6170438 8 1
## 9 0.9111127 9 1
## 10 0.9104236 10 1
## 11 0.9002828 11 1
## 12 0.8851572 12 1
##
## [[2]]
## prices time group
## 1 1.0000000 1 2
## 2 1.0585842 2 2
## 3 1.1114981 3 2
## 4 1.1577551 4 2
## 5 0.9040366 5 2
## 6 1.2531574 6 2
## 7 1.1713514 7 2
## 8 1.0996714 8 2
## 9 1.1441569 9 2
## 10 0.9354981 10 2
## 11 1.1960967 11 2
## 12 1.2099160 12 2
Year-over-year indexes calculate the price change between the same periods across years. If the data are monthly then there are twelve indexes, one for each month of the year. Each element of the January index measures the price movement from January in the base year to January in the comparison year; the second index does the same for February, and so on. IndexNumR provides the function yearOverYearIndexes to estimate these, given the frequency as either 'monthly' or 'quarterly'. This is effectively a form of group index, where the month or quarter gives the group to which the product belongs. IndexNumR will create a group variable based on the supplied frequency and call the groupIndexes function to estimate the indexes. The data must be structured in the frequency that you specify (if you specify 'quarterly' as the frequency in the yearOverYearIndexes function, the time period variable in the dataset must be quarterly).
The output from the yearOverYearIndexes function will have a column for the month or quarter. The quarter labelled 1 is constructed from time periods 1, 5, 9, 13, etc. of the dataset; the quarter labelled 2 from time periods 2, 6, 10, 14, etc.
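The mapping from the time-period variable to the quarter group can be sketched as follows (a Python illustration of the grouping just described, not IndexNumR's internal code):

```python
# map a quarterly time-period variable 1..T to its quarter "group" and to
# the year-over-year comparison period within that group
T = 12
periods = range(1, T + 1)
quarter = [(t - 1) % 4 + 1 for t in periods]      # 1, 2, 3, 4, 1, 2, ...
yoy_period = [(t - 1) // 4 + 1 for t in periods]  # year index within each group
print(list(zip(periods, quarter, yoy_period)))
```

For example, periods 1, 5 and 9 all map to quarter 1 with year-over-year periods 1, 2 and 3, matching the construction of the first index in the output above.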
# Assume the CES_sigma_2 data are quarterly observations over three years.
# This results in 4 indexes (one for each quarter) of 3 periods each.
# Estimate year-over-year chained fisher indexes.
argsList <- list(x = CES_sigma_2, pvar = "prices", qvar = "quantities", pervar = "time",
prodID = "prodID", indexMethod = "fisher", output = "chained")
yearOverYearIndexes("quarterly", "priceIndex", argsList)
## [[1]]
## prices time quarter
## 1 1.000000 1 1
## 2 0.892041 2 1
## 3 1.052713 3 1
##
## [[2]]
## prices time quarter
## 1 1.000000 1 2
## 2 1.324124 2 2
## 3 1.058836 3 2
##
## [[3]]
## prices time quarter
## 1 1.000000 1 3
## 2 1.040759 2 3
## 3 1.046379 3 3
##
## [[4]]
## prices time quarter
## 1 1.0000000 1 4
## 2 0.8388911 2 4
## 3 1.0246700 3 4
IndexNumR is hosted on Github at https://github.com/grahamjwhite/IndexNumR. There, users can find instructions for installing the development version directly from Github, and can report and view bugs or suggest improvements.