|
|
|
Harmonic Analysis |
|
Spectrum Analysis |
|
Autocorrelation Function |
|
Degree of Freedom |
|
Data Window |
|
Significance Tests |
|
|
|
|
Some major purposes of the statistical analysis of time series are: |
|
To understand the variability of the time series. |
|
To identify the regular and irregular oscillations of the time series. |
|
To describe the characteristics of these oscillations. |
|
To understand the physical processes that give rise to each of these oscillations. |
|
To achieve the above, we need to: |
|
Identify the regular cycles (harmonic analysis) |
|
Estimate the importance of these cycles (power spectral analysis) |
|
Isolate or remove these cycles (filtering) |
|
|
|
|
Harmonic analysis is used to identify the
periodic (regular) variations in geophysical time series. |
|
If we have N observations of (xi, yi), the time series y(t) can be approximated by cosine and sine functions: |
|
|
|
|
|
|
|
How many harmonics (cosine/sine functions) do we
need? |
|
In general, if the number of observations is N, the number of harmonics equals N/2 (pairs of cosine and sine functions). |
|
|
|
|
As an example, if the time series is the
monthly-mean temperature from January to December: |
|
N = 12, Δt = 1 month, and T = 12 × Δt = 12 months = one year |
|
1st harmonic (k=1) → annual harmonic (one cycle through the period) |
|
|
|
2nd harmonic (k=2) → semi-annual harmonic (2 cycles through the period) |
|
|
|
Last harmonic (k=N/2) → the smallest period to be included. |
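As a minimal sketch of this procedure (Python/NumPy; the monthly series below is synthetic, with annual and semi-annual cycles of known amplitude), the harmonic coefficients can be computed directly from the sums over the data:

```python
import numpy as np

# Minimal sketch of harmonic analysis for a monthly-mean series (N = 12).
# The series is synthetic: an annual cycle of amplitude 10 and a
# semi-annual cycle of amplitude 2 around a mean of 15 (hypothetical values).
N = 12
t = np.arange(N)                     # months 0..11, Δt = 1 month
y = 15 + 10 * np.cos(2 * np.pi * t / N) + 2 * np.sin(2 * np.pi * 2 * t / N)

def harmonic_coeffs(y, k):
    """Least-squares cosine/sine coefficients of harmonic k (k = 1 .. N/2)."""
    N = len(y)
    t = np.arange(N)
    A = 2.0 / N * np.sum(y * np.cos(2 * np.pi * k * t / N))
    B = 2.0 / N * np.sum(y * np.sin(2 * np.pi * k * t / N))
    return A, B

A1, B1 = harmonic_coeffs(y, 1)       # annual harmonic (k = 1)
A2, B2 = harmonic_coeffs(y, 2)       # semi-annual harmonic (k = 2)
C1 = np.hypot(A1, B1)                # amplitude of the annual harmonic
C2 = np.hypot(A2, B2)                # amplitude of the semi-annual harmonic
print(round(C1, 6), round(C2, 6))    # → 10.0 2.0
```

The analysis recovers the amplitudes 10 and 2 exactly (up to rounding), because the cosine and sine functions are orthogonal over the record.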
|
|
|
|
|
|
In Vector Form: |
|
|
|
|
|
|
|
In Continuous Function Form |
|
|
|
|
|
A Set of Orthogonal Functions fn(x) |
|
|
|
|
|
|
If we want to regress y on more than one variable (x1, x2, x3, …, xn): |
|
|
|
After performing the least-squares fit and removing the means from all variables: |
|
|
|
|
|
Solve the following matrix equation to obtain the regression coefficients a1, a2, a3, a4, …, an: |
|
|
|
|
Because of the orthogonality of the cosine and sine functions, all the coefficients A and B can be computed independently (you do not need to know the other Ak, k = 2, 3, …, N/2, or Bk, k = 1, 2, 3, …, N/2, in order to get A1, for example). |
|
This is a multiple regression case. Using the least-squares fit, we can show that: |
|
|
|
for k = 1, …, N/2 |
|
|
|
|
|
|
Using the following relation, we can combine the sine and cosine components to determine the amplitude of each harmonic. |
|
|
|
|
|
|
|
With this combined form, the harmonic analysis
of y(t) can be rewritten as: |
|
|
|
|
|
|
What is the fraction of the variance (of y)
explained by a single harmonic? |
|
Remember we have shown in the regression
analysis that the fraction is equal to the square of the correlation
coefficient between this harmonic and y: |
|
|
|
|
|
It can be shown that this fraction is |
|
|
|
|
|
|
Since the harmonics are all uncorrelated, no two
harmonics can explain the same part of the variance of y. |
|
In other words, the variances explained by the
different harmonics can be added. |
|
We can add up the fractions to determine how
many harmonics we need to explain most of the variations in the time series
of y. |
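This additivity can be checked numerically. A sketch (Python/NumPy; the random series and variable names are illustrative): the variance fractions of all harmonics of a mean-removed series sum to one.

```python
import numpy as np

# Sketch: the fractions of variance explained by the harmonics of a
# mean-removed series sum to 1, because the harmonics are uncorrelated.
rng = np.random.default_rng(0)
N = 120
t = np.arange(N)
y = rng.normal(size=N)
y = y - y.mean()                         # remove the mean
var_y = np.mean(y ** 2)

fractions = []
for k in range(1, N // 2 + 1):
    A = 2.0 / N * np.sum(y * np.cos(2 * np.pi * k * t / N))
    B = 2.0 / N * np.sum(y * np.sin(2 * np.pi * k * t / N))
    C2 = A ** 2 + B ** 2                 # squared amplitude of harmonic k
    if k == N // 2:
        # sin(pi*t) vanishes at integer t, so the Nyquist harmonic has only
        # a cosine part; with the 2/N normalization it contributes C2/4.
        fractions.append(C2 / (4 * var_y))
    else:
        fractions.append(C2 / (2 * var_y))

print(round(sum(fractions), 6))  # → 1.0
```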
|
|
|
|
By plotting the amplitude of the harmonics as a
function of k, we produce the “power spectrum” of the time series y. |
|
The meaning of the spectrum is that it shows the
contribution of each harmonic to the total variance. |
|
If t is time, then we get the frequency
spectrum. |
|
If t is distance, then we get the wavenumber
spectrum. |
|
|
|
|
Integer values of k have no specific meaning by themselves. The corresponding frequencies are determined by the length of the observation period T (= NΔt): |
|
k = 0, 1, 2, 3, …, N/2 cycles during the period T. |
|
Since we use the N observations to determine a mean and N/2 line spectra, each spectral line has only about 2 degrees of freedom. With so few degrees of freedom, the line spectrum is not likely to be reproduced from one sampling period to the next. |
|
Also, most geophysical “signals” that we are interested in and wish to study are not truly “periodic”; many of them are only “quasi-periodic” (for example, ENSO). So we are more interested in the spectrum over a “band” of frequencies, not at a specific frequency. |
|
|
|
|
So we need a “continuous spectrum” that tells us
the variance of y(t) per unit frequency (wavenumber) interval: |
|
|
|
|
|
|
|
k* is called the “Nyquist frequency”, which has a frequency of one cycle per 2Δt. (This is the k = N/2 harmonic.) |
|
The Nyquist frequency is the highest frequency that can be resolved by the given spacing of the data points. |
|
|
|
|
The variance at frequencies higher than the Nyquist frequency (k > k*) will be “aliased” into lower frequencies (k < k*) in the power spectrum. This is the so-called “aliasing problem”. |
|
This is a problem if the data contain large variance at frequencies higher than k*. |
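Aliasing is easy to demonstrate. A sketch (Python/NumPy; the frequencies are chosen for illustration): a sampled oscillation above the Nyquist frequency is indistinguishable from its low-frequency alias.

```python
import numpy as np

# Sketch of aliasing: with Δt = 1, the Nyquist frequency is 0.5 cycles per
# time step. A 0.9 cycles/step oscillation is indistinguishable, at the
# sample points, from its alias at |1 - 0.9| = 0.1 cycles/step.
t = np.arange(20)                       # Δt = 1
high = np.cos(2 * np.pi * 0.9 * t)      # frequency above Nyquist
alias = np.cos(2 * np.pi * 0.1 * t)     # aliased frequency below Nyquist
print(np.allclose(high, alias))         # → True: identical samples
```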
|
|
|
|
|
|
(1) Direct Method (the more popular method) |
|
Step 1: Perform the Fourier transform of y(t) to get C²(k). |
|
Step 2: Smooth C²(k) by averaging a few adjacent frequencies together, |
|
or by averaging the spectra of a few time series together. |
|
→ Both approaches smooth a line spectrum into a continuous spectrum and |
|
increase the degrees of freedom of the spectrum. |
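The two steps can be sketched as follows (Python/NumPy; the 900-point random series and the 10-estimate band are illustrative, matching Example 1 below):

```python
import numpy as np

# Sketch of the direct method: compute the line spectrum with an FFT, then
# smooth it by averaging adjacent frequency estimates (here 10 per band,
# giving roughly 20 degrees of freedom per smoothed estimate).
rng = np.random.default_rng(1)
N = 900
y = rng.normal(size=N)
y = y - y.mean()

Y = np.fft.rfft(y)
raw = np.abs(Y[1 : N // 2 + 1]) ** 2    # line spectrum C^2(k), k = 1..N/2
nband = 10                              # adjacent estimates per band
smooth = raw.reshape(-1, nband).mean(axis=1)
print(len(raw), len(smooth))            # → 450 45
```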
|
|
|
|
|
|
Example 1 – smoothing over frequency bands |
|
A time series has 900 days of record. If we do a Fourier analysis, the bandwidth will be 1/900 day⁻¹, and each of the 450 spectral estimates will have 2 degrees of freedom. If we average every 10 adjacent estimates together, the bandwidth becomes 1/90 day⁻¹ and each estimate has 20 d.o.f. |
|
Example 2 – smoothing over the spectra of several time series |
|
Suppose we have 10 time series of 900 days each. If we compute spectra for each of these and then average the individual spectral estimates for each frequency over the sample of 10 spectra, we derive a spectrum with a bandwidth of 1/900 day⁻¹ where each spectral estimate has 20 degrees of freedom. |
|
|
|
|
|
|
|
|
(2) Time-Lag Correlation Method |
|
It can be shown that the autocorrelation function and the power spectrum are Fourier transforms of each other. So we can obtain the continuous spectrum by performing harmonic analysis on the lag-correlation function over the interval -TL ≤ τ ≤ TL. |
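A sketch of this method (Python/NumPy; N = 900 and maximum lag L = 90 are illustrative, matching the bandwidth discussion below):

```python
import numpy as np

# Sketch of the time-lag correlation method: estimate the autocovariance out
# to a maximum lag L, then Fourier-transform it (a cosine transform, since
# the function is symmetric) to obtain a smoothed spectrum with about N/L
# degrees of freedom per estimate.
rng = np.random.default_rng(2)
N, L = 900, 90
y = rng.normal(size=N)
y = y - y.mean()

# autocovariance at lags 0..L
acov = np.array([np.mean(y[: N - k] * y[k:]) for k in range(L + 1)])

lags = np.arange(-L, L + 1)
acov_sym = np.concatenate([acov[:0:-1], acov])   # symmetric extension -L..L
freqs = np.arange(L + 1) / (2.0 * L)             # 0 .. 0.5 cycles per Δt
spectrum = np.array([np.sum(acov_sym * np.cos(2 * np.pi * f * lags))
                     for f in freqs])
print(len(spectrum))                             # → 91 (= L + 1 estimates)
```

The spectrum has L + 1 estimates between zero and the Nyquist frequency, consistent with the bandwidth Δf = 1/(2LΔt) discussed below.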
|
|
|
|
|
|
Bandwidth (Δf) = width of the frequency band → resolution of the spectrum |
|
Δf = 1/N (cycles per unit time interval) |
|
For example, if a time series has 900 monthly-mean data points: |
|
bandwidth = 1/900 (cycles per month) |
|
Nyquist frequency = 1/2 (cycle per month) |
|
Total number of frequency bands = (Nyquist frequency − 0) / bandwidth |
|
= (0.5)/(1/900) = 450 = N/2 |
|
Each frequency band has about 2 degrees of freedom. |
|
If we average several bands together, we
increase the degrees of freedom but reduce the resolution (larger
bandwidth). |
|
|
|
|
With the time-lag correlation method, the bandwidth of the power spectrum is determined by the maximum time lag (L) used in the calculation: |
|
Δf = 1 cycle / (2LΔt) |
|
Number of frequency bands = (Nyquist frequency − 0) / Δf |
|
= (2Δt)⁻¹ / (2LΔt)⁻¹ = L |
|
Number of degrees of freedom = N / (number of bands) |
|
= N/L |
|
|
|
|
Originally, the autocorrelation/autocovariance function was used to estimate the dominant periods in a time series. |
|
The autocovariance is the covariance of a variable with itself at some other time, measured by a time lag (or lead) τ. |
|
The autocovariance as a function of the time lag τ: |
|
|
|
|
|
|
The autocorrelation function is the normalized autocovariance function: |
|
|
|
|
|
|
|
Symmetry property of the autocovariance/autocorrelation function: |
|
φ(-τ) = φ(τ) and r(-τ) = r(τ). |
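A sketch of the sample autocorrelation and its symmetry (Python/NumPy; the random series is illustrative):

```python
import numpy as np

# Sketch of the autocorrelation function r(τ) = φ(τ)/φ(0); the symmetry
# r(-τ) = r(τ) follows directly from the definition of the autocovariance.
rng = np.random.default_rng(3)
y = rng.normal(size=500)
y = y - y.mean()

def autocorr(y, lag):
    lag = abs(lag)                          # φ(-τ) = φ(τ), r(-τ) = r(τ)
    n = len(y)
    phi = np.mean(y[: n - lag] * y[lag:])   # autocovariance φ(τ)
    return phi / np.mean(y * y)             # normalize by φ(0)

print(autocorr(y, 5) == autocorr(y, -5), round(autocorr(y, 0), 6))
# → True 1.0
```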
|
|
|
|
|
|
If a = 0 in the red noise model, then we have white noise: |
|
x(t) = ε(t) → a series of random numbers |
|
The autocorrelation function of white noise is: |
|
r(τ) = δ(τ) → non-zero only at τ = 0 |
|
White noise has no prediction value. |
|
Red noise is useful for persistence forecasts. |
|
|
|
|
|
|
|
|
|
If the lag is small, the autocorrelation is
still positive for many geophysical variables. |
|
This means there is some “persistence” in the
variables. |
|
Therefore, if there are N observations in sequence, they cannot be considered independent of each other. |
|
This means the number of degrees of freedom is less than N. |
|
|
|
|
The typical autocorrelation function tells us
that data points in a time series are not independent from each other. |
|
→ The number of degrees of freedom is less than the number of data points (N). |
|
Can we estimate the degrees of freedom from the autocorrelation function? |
|
For a time series of red noise, it has been suggested that the degrees of freedom can be determined as follows: |
|
N* = NΔt / (2Te). |
|
Here Te is the e-folding decay time of the autocorrelation (the lag at which the autocorrelation drops to 1/e), and Δt is the time interval between data points. |
|
The number of degrees of freedom is only about half the number of e-folding times spanned by the data. |
|
|
|
|
For red noise, we know: |
|
r(τ) = exp(-τ/Te) → Te = -τ / ln(r(τ)) |
|
If we know the autocorrelation at τ = Δt, then we can find that |
|
|
|
For example: |
|
|
|
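A worked example as a sketch (Python/NumPy; the record length of 900 points and the lag-1 autocorrelation of 0.5 are hypothetical values):

```python
import numpy as np

# Sketch: effective degrees of freedom for a red-noise series,
# N* = N Δt / (2 Te), with Te = -Δt / ln r(Δt) from the lag-1
# autocorrelation. The value r1 = 0.5 is hypothetical.
N, dt = 900, 1.0
r1 = 0.5
Te = -dt / np.log(r1)        # e-folding decay time of the autocorrelation
N_eff = N * dt / (2 * Te)    # effective degrees of freedom
print(round(Te, 3), round(N_eff, 1))  # → 1.443 311.9
```

Even with modest persistence (r at lag 1 of 0.5), the effective number of degrees of freedom drops to about a third of N.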
|
Parseval's theorem is important for power spectrum analysis and for the time filtering to be discussed later. |
|
The theorem states that the square of the time series integrated over time is equal to the square (inner product) of its Fourier transform integrated over frequency: |
|
|
|
|
|
|
|
Here F1(ω) and F2(ω) are the Fourier transforms of f1(t) and f2(t), respectively. |
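The discrete analogue of Parseval's theorem can be verified numerically; a sketch (Python/NumPy, using NumPy's FFT normalization convention):

```python
import numpy as np

# Sketch of Parseval's theorem in discrete form: the sum of y(t)^2 over time
# equals the sum of |Y(k)|^2 over frequency divided by N (this 1/N factor is
# NumPy's FFT normalization convention).
rng = np.random.default_rng(4)
y = rng.normal(size=256)
Y = np.fft.fft(y)
lhs = np.sum(y ** 2)                      # "energy" in the time domain
rhs = np.sum(np.abs(Y) ** 2) / len(y)     # "energy" in the frequency domain
print(np.allclose(lhs, rhs))              # → True
```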
|
|
|
|
Let's use Parseval's theorem to calculate the power spectrum of red noise. |
|
We already showed that the autocorrelation
function of the red noise is: |
|
|
|
|
|
|
|
By performing the Fourier transform of the
autocorrelation function, we obtain the power spectrum of the red noise: |
|
|
|
|
|
|
The Fourier transform yields the “true” power spectrum only for a time series with an infinite time domain. In real cases, the time series has a finite length. |
|
In effect, we obtain the finite time series from the infinite time domain through a “data window”: |
|
|
|
|
|
|
|
|
|
|
|
|
|
How does the data window affect the power
spectrum? |
|
|
|
|
If the infinite time series is f(t) and the
sample time series is g(t), then the power spectrum calculated from the
sample is related to the true spectrum in the following way: |
|
|
|
|
|
|
|
Based on the “Convolution Theorem” |
|
|
|
The sample spectrum is not equal to the true spectrum but is instead the true spectrum weighted by the spectrum of the data window used: |
|
|
|
|
The square data window is: |
|
w(t) = 1 within the window domain |
|
= 0 everywhere else. |
|
The data window has the following weighting
effects on the true spectrum: |
|
|
|
|
The square window smooths the true spectrum. |
|
The degree of smoothing is determined by the window length (L). |
|
The shorter the window length, the stronger the smoothing will be. |
|
In addition to the smoothing effect, the data window also causes “spectral leakage”. |
|
This leakage introduces spurious oscillations at higher and lower frequencies that are out of phase with the true oscillation. |
|
|
|
|
How do we reduce the side lobes associated with
the data window? |
|
→ Use a tapered data window. |
|
|
|
|
To produce a narrow central lobe |
|
→ requires a larger T (the length of the data window) |
|
To produce insignificant side lobes |
|
→ requires a smooth data window without sharp corners |
|
A rectangular or Bartlett window leaves the time
series undistorted, but can seriously distort the frequency spectrum. |
|
A tapered window distorts the time series but may yield a more representative frequency spectrum. |
|
|
|
|
|
|
Bartlett (square or rectangular) window |
|
|
|
|
|
|
|
|
|
|
|
|
|
This is the most commonly used window, but we often use it without realizing that we are using it. |
|
The Bartlett window has a serious side lobe
problem. Frequencies that are outside the range of frequencies actually
resolved can have too strong an influence on the power spectra at the
frequencies resolved. |
|
|
|
|
The cosine bell window is perhaps the most
frequently used window in meteorological applications. |
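The leakage reduction from the cosine bell taper can be demonstrated directly; a sketch (Python/NumPy; the signal frequency and bin cutoff are chosen for illustration):

```python
import numpy as np

# Sketch: compare spectral leakage from the rectangular window with a
# cosine bell (Hann) taper. A sinusoid placed between FFT bins leaks badly
# with the rectangular window; the taper suppresses the far side lobes.
N = 256
t = np.arange(N)
y = np.cos(2 * np.pi * 10.5 * t / N)           # frequency between bins

bell = 0.5 * (1 - np.cos(2 * np.pi * t / N))   # cosine bell (Hann) window
P_rect = np.abs(np.fft.rfft(y)) ** 2           # rectangular-window spectrum
P_bell = np.abs(np.fft.rfft(y * bell)) ** 2    # tapered spectrum

# leakage far from the peak (bins 30 and beyond) is much weaker when tapered
print(P_bell[30:].sum() < P_rect[30:].sum())   # → True
```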
|
|
|
|
|
|
|
|
|
|
Null hypothesis: the time series is not periodic in the region of interest, but is simply noise. |
|
We thus compare the amplitude of a spectral peak to a background value determined by a red-noise fit to the spectrum. |
|
Use F-Test: |
|
|
|
|
The time filtering technique is used to remove or retain variations in particular frequency bands of the time series. |
|
There are three types of filtering: |
|
(1) High-Pass Filtering |
|
keep high-frequency parts of the variations and remove low-frequency
parts of the variations. |
|
(2) Low-Pass Filtering |
|
keep low-frequency and remove high-frequency parts of the
variations. |
|
(3) Band-Pass Filtering |
|
remove both higher and lower frequencies and keep only certain
frequency bands of the variations. |
|
|
|
|
Time filters work in the same way as the data windows we discussed earlier. |
|
|
|
|
|
By performing Fourier transform, we know that: |
|
|
|
The ratio between the filtered and the original power spectrum is called the “response function”: |
|
|
|
If R(ω) = 1 → the original amplitude at frequency ω is kept. |
|
If R(ω) = 0 → the original amplitude at frequency ω is filtered out. |
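A sketch of computing a response function (Python/NumPy; the 3-point running mean is an illustrative low-pass filter, not a filter prescribed in these notes):

```python
import numpy as np

# Sketch: response function of a simple 3-point running mean (a low-pass
# filter). Its analytic response is R(ω) = (1 + 2 cos ωΔt)/3; we verify it
# against the FFT of the filter's impulse response.
N = 240
freqs = np.fft.rfftfreq(N, d=1.0)              # frequency in cycles per Δt

impulse = np.zeros(N)
impulse[0] = 1.0
# circular 3-point running mean applied to the impulse
filt = (np.roll(impulse, -1) + impulse + np.roll(impulse, 1)) / 3.0

R_empirical = np.abs(np.fft.rfft(filt))        # |filtered/original| spectrum
R_analytic = np.abs(1 + 2 * np.cos(2 * np.pi * freqs)) / 3.0
print(np.allclose(R_empirical, R_analytic), round(R_analytic[0], 6))
# → True 1.0
```

R = 1 at zero frequency (the mean passes through unchanged), and R falls toward zero at higher frequencies, as expected for a low-pass filter.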
|
|
|
|
The ideal filter should have a response of 1
over the frequency bands we want to keep and a response of zero over the
frequency bands we want to remove: |
|
|