
Math Equations

Friday, July 20, 2018

Sums of powers of roots

It has been known since Newton's time that power sums of the form $\sigma_k := \sum_i x_i^k$ are polynomial expressions in the coefficients of $f = \prod_i (z - x_i)$. What follows is my best understanding of this. We use a formal power series approach, and therefore define $N_f := \sum_k \sigma_k z^k$. I want to prove that
$$z^n f(1/z)\, N_f(z) = z^{n-1} f'(1/z). \tag{1}$$
(It is written this way since $z^n f(1/z)$ and $z^{n-1} f'(1/z)$ are polynomials.)

The proof uses only logarithmic differentiation and geometric series expansion:
$$\frac{z^{n-1} f'(1/z)}{z^n f(1/z)} = \frac{1}{z}\,(\log f)'(1/z) = \frac{1}{z}\sum_i \frac{1}{1/z - x_i} = \sum_i \frac{1}{1 - x_i z} = \sum_i \sum_k x_i^k z^k = \sum_k \sigma_k z^k = N_f(z).$$
For example, suppose $f = (z - x_0)(z - x_1) = (z - 2)(z - 3) = z^2 - 5z + 6$. The first few power sums are $\sigma_0 = 2$, $\sigma_1 = 5$, $\sigma_2 = 13$, $\sigma_3 = 35$, $\sigma_4 = 97$, so $N_f \equiv 2 + 5z + 13z^2 + 35z^3 + 97z^4 \pmod{z^5}$. Furthermore $z^n f(1/z) = 1 - 5z + 6z^2$ and $z^{n-1} f'(1/z) = 2 - 5z$. We may test (1) by computing its left- and right-hand sides up to fourth order:
$$(2 + 5z + 13z^2 + 35z^3 + 97z^4)(1 - 5z + 6z^2) \equiv 2 - 5z \pmod{z^5}.$$
Conversely, we can compute more terms $\sigma_k$ by polynomial division:
$$N_f = \frac{2 - 5z}{1 - 5z + 6z^2} = \sum_{0 \le k < m} \sigma_k z^k + z^m\,\frac{\text{remainder polynomial}}{1 - 5z + 6z^2}.$$
In Python:
>>> import sympy
>>> z = sympy.var('z')
>>> Nf = sympy.fps((2 - 5*z)/(1 - 5*z + 6*z*z))
>>> Nf.truncate(10)
2 + 5*z + 13*z**2 + 35*z**3 + 97*z**4 + 275*z**5
+ 793*z**6 + 2315*z**7 + 6817*z**8 + 20195*z**9 + O(z**10)
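Identity (1) itself can also be checked directly in sympy. The following is a small sanity check for the same example $f = (z-2)(z-3)$; the variable names are mine:

```python
import sympy

z = sympy.var('z')
n = 2
f = lambda t: (t - 2) * (t - 3)   # f(z) = z**2 - 5*z + 6
fprime = lambda t: 2*t - 5        # f'(z)

lhs_poly = sympy.expand(z**n * f(1/z))           # z^n f(1/z) = 1 - 5z + 6z^2
rhs_poly = sympy.expand(z**(n-1) * fprime(1/z))  # z^(n-1) f'(1/z) = 2 - 5z

# N_f truncated after the first five power sums
Nf = 2 + 5*z + 13*z**2 + 35*z**3 + 97*z**4

# (z^n f(1/z)) * N_f(z) - z^(n-1) f'(1/z) should vanish mod z^5
rem = sympy.expand(lhs_poly * Nf - rhs_poly)
low_order = sympy.series(rem, z, 0, 5).removeO()
print(low_order)  # 0: identity (1) holds through order z^4
```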

Wednesday, March 7, 2018

Demuxing a random variable

Suppose f=ag+(1a)h, a convex combination of pdfs. Then f is a pdf.

Suppose $X$ has $f$ as its pdf. For any evaluation $x$ of $X$, produce a random variable with conditional distribution
$$Y \sim \mathrm{Bernoulli}\!\left(\frac{(1-a)\,h(x)}{a\,g(x) + (1-a)\,h(x)}\right).$$
Then $X \mid (Y=0) \sim g$ and $X \mid (Y=1) \sim h$, since
$$p(x \mid 0) = \frac{p(0 \mid x)\,p(x)}{\int_x p(0 \mid x)\,p(x)} = \frac{\frac{a\,g(x)}{f(x)}\,f(x)}{\int_x \frac{a\,g(x)}{f(x)}\,f(x)} = \frac{a\,g(x)}{\int_x a\,g(x)} = \frac{a\,g(x)}{a \cdot 1} = g(x),$$
$$p(x \mid 1) = \frac{p(1 \mid x)\,p(x)}{\int_x p(1 \mid x)\,p(x)} = \frac{\frac{(1-a)\,h(x)}{f(x)}\,f(x)}{\int_x \frac{(1-a)\,h(x)}{f(x)}\,f(x)} = \frac{(1-a)\,h(x)}{\int_x (1-a)\,h(x)} = \frac{(1-a)\,h(x)}{(1-a) \cdot 1} = h(x).$$
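A quick Monte-Carlo simulation illustrates the construction. This is a sketch of my own, using the discrete example from the next paragraph ($a = \frac{1}{2}$, $g = [0.4, 0.6, 0]$, $h = [0, 0.6, 0.4]$):

```python
import random

random.seed(0)
a = 0.5
g = [0.4, 0.6, 0.0]
h = [0.0, 0.6, 0.4]
f = [a * gi + (1 - a) * hi for gi, hi in zip(g, h)]  # mixture [0.2, 0.6, 0.2]

counts = {0: [0, 0, 0], 1: [0, 0, 0]}
trials = 100_000
for _ in range(trials):
    # draw X ~ f
    x = random.choices(range(3), weights=f)[0]
    # draw Y | X=x ~ Bernoulli((1-a) h(x) / f(x))
    y = 1 if random.random() < (1 - a) * h[x] / f[x] else 0
    counts[y][x] += 1

g_est = [c / sum(counts[0]) for c in counts[0]]
h_est = [c / sum(counts[1]) for c in counts[1]]
print([round(p, 2) for p in g_est])  # approximates g = [0.4, 0.6, 0.0]
print([round(p, 2) for p in h_est])  # approximates h = [0.0, 0.6, 0.4]
```

The empirical conditional distributions of $X$ given $Y=0$ and $Y=1$ recover $g$ and $h$ respectively.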

The utility of this construction is as follows: it lets you turn a probabilistic experiment with definite outcomes (say the probabilities are $[0.2, 0.6, 0.2]$) into an experiment whose outcomes are again probability distributions, such as $[0.4, 0.6, 0]$ and $[0, 0.6, 0.4]$, each with probability $\frac{1}{2}$. This blurs the distinction between deterministic variables (often equivalently: zero variance, zero entropy) and nondeterministic variables (positive variance, positive entropy).

In fact, it suggests that entropy can be calculated from the coefficients of an arbitrary convex combination. In the example just given, the entropy of $[0.2, 0.6, 0.2]$ with respect to $[0.4, 0.6, 0]$ and $[0, 0.6, 0.4]$ is the entropy of $[0.5, 0.5]$ with respect to $[1, 0]$ and $[0, 1]$, i.e. $\frac{1}{2}\log_2 2 + \frac{1}{2}\log_2 2 = 1$.

And why not try affine combinations! $[0.2, 0.6, 0.2] = 2\,[\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3}] - 1\,[\tfrac{7}{15}, \tfrac{1}{15}, \tfrac{7}{15}]$. The associated entropy in base 2 is $-\left(2\log_2 2 + (-1)\log_2(-1)\right) = -2 + \pi i \log_2 e = -2 + 4.53236\,i$.
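The complex value above can be reproduced numerically. This is a sketch using Python's cmath, whose principal branch gives $\log(-1) = \pi i$; the helper name is mine:

```python
import cmath
import math

def entropy_base2(coeffs):
    """-sum c*log2(c), extended to negative coefficients via the
    principal branch of the complex logarithm."""
    return -sum(c * cmath.log(c) / math.log(2) for c in coeffs)

e_convex = entropy_base2([0.5, 0.5])  # convex case: 1 bit
e_affine = entropy_base2([2, -1])     # affine case: -2 + (pi / ln 2) i
print(e_convex)
print(e_affine)
```

The affine result is $-2 + \pi i/\ln 2 \approx -2 + 4.53236\,i$, matching the calculation above.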