
Why is the non-linear response to random values always positive?

I’m creating a non-linear response to a series of random values from {-1, +1} using a simple second-order Volterra kernel:

r(k) = \sum_{i=1}^{M} \sum_{j=i}^{M} w_{ij} \, a(k-i) \, a(k-j)

Since the a(k) have zero mean, I would expect r(k) to have zero mean as well, for arbitrary w values. However, the mean of r(k) is always positive, while the mean of a(k) behaves as expected: it is close to zero and changes sign from run to run.

Why don’t I get similar behavior for r(k)? Is it because the a(k) are pseudo-random, so two different values of a are not actually independent?

Here is the code I use:

import numpy as np
import matplotlib.pyplot as plt
import itertools

# array of random values {-1, +1}
A = 2 * np.random.randint(2, size=10000) - 1

# array of random weights
M = 3
w = np.random.rand(int(M*(M+1)/2))

# non-linear response to random values
R = []
for i in range(M, len(A)):
    # all second-order products a(k-i)*a(k-j), i <= j, over the current window
    vals = np.asarray([np.prod(x) for x in itertools.combinations_with_replacement(A[i-M:i], 2)])
    R.append(np.dot(vals, w))

print(np.mean(A), np.var(A))
print(np.mean(R), np.var(R))

Edit:

A check on whether the quadratic form employed by the kernel is positive definite fails (i.e. there are negative leading principal minors). The code for the check:

import scipy.linalg as lin

# reconstruct the symmetric weight matrix of the quadratic form
wm = np.zeros((M, M))
w_index = 0
for r in range(0, M):
    for c in range(r, M):
        wm[r, c] += w[w_index]/2
        wm[c, r] += w[w_index]/2
        w_index += 1

# Sylvester's criterion: check the leading principal minors
for r in range(0, M):
    if lin.det(wm[:r+1, :r+1]) < 0:
        print('found negative principal minor of order', r+1)
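As a cross-check (not in the original post), the same conclusion can be reached from the eigenvalues of the symmetric matrix: the form is positive definite only if all eigenvalues are positive. A minimal sketch, assuming wm from the snippet above:

# Sketch: eigenvalue test for definiteness of the symmetric matrix wm.
# eigvalsh is appropriate here because wm is symmetric by construction.
eigs = np.linalg.eigvalsh(wm)
print('eigenvalues:', eigs)
print('positive definite:', bool(np.all(eigs > 0)))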

Advertisement

Answer

This question ended up being about math rather than programming. Nevertheless, here is my own answer.

Simply put, when the two indices in a product a(k-i)·a(k-j) are equal, the factors are not independent (they are the same variable). For values in {-1, +1} such a product equals a(k-i)² = 1, so it does not have zero mean, and because the diagonal weights drawn from np.random.rand are positive, the mean of the whole expression is shifted into the positive range.
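A quick numeric illustration of this claim (a sketch, not part of the original answer): products with equal indices are identically 1, while products of distinct samples average to roughly zero.

# Sketch: compare same-index and distinct-index products for a {-1, +1} source.
a = 2 * np.random.randint(2, size=100000) - 1
print(np.mean(a * a))           # exactly 1: a(k)^2 == 1 for a in {-1, +1}
print(np.mean(a[:-1] * a[1:]))  # close to 0: distinct samples are independent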

Formally, the implemented function is a quadratic form, whose mean value is given by

E[A^T W A] = \mathrm{tr}(W \Sigma) + \mu^T W \mu,

where \mu and \Sigma are the vector of expected values and the covariance matrix of the vector A, respectively.
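For the specific source used here this simplifies further. A worked step, under the assumption that the a(k) are i.i.d. with values in {-1, +1}: each a(k) has zero mean and unit variance, and distinct samples are uncorrelated, so \mu = 0 and \Sigma = I, which gives

E[A^T W A] = \mathrm{tr}(W I) = \sum_i w_{ii},

the sum of the diagonal weights of the quadratic form.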

With \mu = 0, only the trace term of this equation remains. The resulting estimate can be computed with the following code, and it indeed gives values close to the statistical results in the question.

# Estimate the mean of R:
# sum the weights that land on the main diagonal of the quadratic form
# (this is tr(W Sigma) with Sigma = var(A) * I)
w_sum = 0
w_index = 0
for r in range(0, M):
    for c in range(r, M):
        if r == c: w_sum += w[w_index]
        w_index += 1
Rmean_est = np.var(A) * w_sum
print(Rmean_est)

This estimate assumes that elements of a with different indices are independent. Any implicit dependence due to the nature of the pseudo-random generator, if present, likely changes the resulting estimate only slightly.
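A quick way to confirm the explanation (a sketch, not part of the original answer, reusing A, w, and M from the snippets above): zero out the diagonal weights and recompute the response. With the a(k-i)² terms removed, the mean of R should drop back to around zero and change sign from run to run.

# Sketch: remove the diagonal (i == j) weights and recompute the response.
# The flattening order matches the (r, c) loops used above.
w_nodiag = w.copy()
w_index = 0
for r in range(0, M):
    for c in range(r, M):
        if r == c:
            w_nodiag[w_index] = 0.0
        w_index += 1

R0 = []
for i in range(M, len(A)):
    vals = np.asarray([np.prod(x) for x in itertools.combinations_with_replacement(A[i-M:i], 2)])
    R0.append(np.dot(vals, w_nodiag))

print(np.mean(R0))  # now close to zero, sign varies between runs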
