Here are some dominoes based on [1]. The idea behind this dataset is that there are two "patterns" in the data: the MNIST image and the CIFAR image.
Notice that some of the dominoes have only one "pattern" present. By tracking training/test loss on these one-sided dominoes, we can tease apart how quickly the model learns the two different patterns.
We'd like to compare these pattern-learning curves to the curves predicted by the toy model of [2]. In particular, we'd like to compare predictions to the empirical curves as we change the relevant macroscopic parameters (e.g., prevalence, reliability, and simplicity).
This means running sweeps over these macroscopic parameters.
Prevalence
What happens as we change the relative incidence of MNIST vs CIFAR images in the dataset? We can accomplish this by varying the frequency of one-sided MNIST dominoes vs. one-sided CIFAR dominoes.
We control three parameters:
pm, the probability of a domino containing an MNIST image (either one-sided or two-sided),
pc, the probability of a domino containing a CIFAR image (either one-sided or two-sided), and
p1, the probability of a domino being one-sided.
Two parameters are fixed by our datasets:
Nm, the number of samples in the MNIST dataset.
Nc, the number of samples in the CIFAR dataset.
Given these parameters, we have to determine:
rm0, the fraction of the MNIST dataset that we reject,
rm1, the fraction of the MNIST dataset that ends up in one-sided dominoes,
rm2, the fraction of the MNIST dataset that ends up in two-sided dominoes,
and, similarly, rc0, rc1, and rc2 for the CIFAR dataset.
Here's the corresponding Sankey diagram (in terms of numbers of samples rather than probabilities, but it's totally equivalent).
Six unknowns means we need six constraints.
We get the first two from the requirement that probabilities are normalized,
rm0+rm1+rm2=rc0+rc1+rc2=1,
and another from the requirement that two-sided dominoes use the same number of samples from both datasets,
rm2Nm=rc2Nc.
Out of convenience, we'll introduce an additional variable, which we immediately constrain,
N=rc1Nc+rm1Nm+rm2Nm,
the number of samples in the resulting dominoes dataset.
We get the last three constraints from our choices of pm, pc, and p1:
So unfortunately, this yields trivial answers where rm0=rc0=1 and all other values are 0. The solution seems to be to simply allow empty dominoes (dominoes with neither an MNIST nor a CIFAR side).
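With empty dominoes allowed, the construction reduces to independent coin flips per side. Here's a minimal numpy sketch (the function name and type labels are mine, not from the original pipeline):

```python
import numpy as np

def sample_domino_types(n, p_m, p_c, rng):
    """Assign each of n dominoes a type by independently including an MNIST
    half (probability p_m) and a CIFAR half (probability p_c). A domino with
    neither half is "empty", which sidesteps the trivial-solution problem."""
    has_m = rng.random(n) < p_m
    has_c = rng.random(n) < p_c
    return np.where(has_m & has_c, "double",
           np.where(has_m, "mnist",
           np.where(has_c, "cifar", "empty")))
```

Under this scheme a fraction pm of dominoes contains an MNIST image, a fraction pc contains a CIFAR image, and empties occur with probability (1−pm)(1−pc).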
Reliability
We can vary the reliability by inserting "wrong" dominoes. That is, with some probability, we make one of the two sides display a class that is inconsistent with the domino's label.
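A sketch of one such corruption scheme (numpy; the choice to resample the offending side's class uniformly from the wrong classes is my own):

```python
import numpy as np

def corrupt_labels(side_classes, labels, p_wrong, n_classes, rng):
    """With probability p_wrong, replace a side's class with a uniformly
    random *incorrect* class, leaving the domino's label unchanged."""
    side_classes = side_classes.copy()
    flip = rng.random(len(side_classes)) < p_wrong
    # Draw an offset in {1, ..., n_classes-1} so the new class never equals the label
    offsets = rng.integers(1, n_classes, size=len(side_classes))
    side_classes[flip] = (labels[flip] + offsets[flip]) % n_classes
    return side_classes
```

Setting p_wrong to 0 leaves the dataset untouched; p_wrong of 1 makes the corrupted side maximally unreliable.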
Simplicity
One of the downsides of this task is that we don't have much control over the simplicity of the feature. MNIST is simpler than CIFAR, sure, but how much? How might we control this?
I'm running a series of experiments that involve some variation of: (1) perturb a weight initialization; (2) train the perturbed and baseline models in parallel; and (3) track how the perturbation grows/shrinks over time.
Naively, if we're interested in a perturbation analysis of the choice of weight initialization, we prepare some baseline initialization, w0, and then apply i.i.d. Gaussian noise, δ, to each of its elements, δi∼N(0,ϵ2). (If we want, we can let this vary layer-by-layer and let it depend on, for example, the norm of the layer it's being applied to.)
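A sketch of this naive perturbation (numpy; scaling ϵ by each layer's RMS entry size is one of the layer-dependent choices mentioned above, not necessarily the author's):

```python
import numpy as np

def perturb_init(weights, eps, rng):
    """Apply i.i.d. Gaussian noise to each element of each layer's weights;
    the noise scale is eps times the layer's RMS entry size, so eps measures
    the perturbation as a fraction of the layer norm."""
    perturbed = []
    for w in weights:
        scale = eps * np.linalg.norm(w) / np.sqrt(w.size)  # eps * RMS(w)
        perturbed.append(w + scale * rng.standard_normal(w.shape))
    return perturbed
```

With this scaling, the norm of the perturbation is approximately eps times the norm of the layer it is applied to.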
The problem with this strategy is that the perturbed weights w=w0+δ are, in general, no longer sampled from the same distribution as the baseline weights.
There is nothing wrong with this per se, but it introduces a possible confounder: the thickness of the shell the weights are sampled from. This is especially relevant if we're interested specifically in how behavior changes with the size of a perturbation. As responsible experimentalists, we don't like confounders.
Fortunately, there's an easy way to "clean up" Kaiming initialization to make it better suited to this perturbative analysis.
Kaiming initialization lives in a hyperspherical shell
Consider a matrix, w(l), representing the weights of a particular layer l, with shape (Din(l), Dout(l)). Din(l) is also called the fan-in of the layer, and Dout(l) (which equals Din(l+1)) the fan-out. For ease of presentation, we'll ignore the bias, though the following reasoning applies equally well to it.
We're interested in the vectorized form of this matrix, w(l)∈RD(l), where D(l)=Din(l)×Dout(l).
In Kaiming initialization, we sample the components, wi(l), of this vector i.i.d. from a normal distribution with mean 0 and variance σ² (where σ²=2/Din(l)).
Geometrically, this is equivalent to sampling from a hyperspherical shell, SD−1, with radius √D·σ and (fuzzy) thickness δ. (Ok, so technically, because the radius can vary from layer to layer, it's a hyperellipsoidal shell.)
This follows from some straightforward algebra (dropping the superscript l for simplicity):
E[∣w∣²]=∑i E[wi²]=Dσ²=2Dout,
where the last equality follows from the choice of σ for Kaiming initialization: Dσ²=(Din×Dout)·(2/Din)=2Dout.
This means that for suitably wide networks (Din(l)→∞), the thickness of this shell goes to 0.
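A quick numerical check of this claim (a sketch I added, not one of the original experiments): sample many Kaiming-initialized layers and measure the spread of their norms relative to the mean.

```python
import numpy as np

def relative_shell_thickness(d_in, d_out, n_samples, rng):
    """Sample n_samples Kaiming-style layers (sigma^2 = 2/d_in) and return
    std(|w|) / mean(|w|), a proxy for the relative shell thickness."""
    sigma = np.sqrt(2.0 / d_in)
    w = sigma * rng.standard_normal((n_samples, d_in * d_out))
    norms = np.linalg.norm(w, axis=1)
    return norms.std() / norms.mean()
```

The relative thickness scales like 1/√(2D), so widening the layer drives it toward 0.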
Taking the thickness to 0
This suggests an alternative initialization strategy. What if we immediately take the limit Din(l)→∞ and sample directly from the boundary of a hypersphere with radius √D·σ, i.e., take the shell thickness to be 0?
This can easily be done by sampling each component from a normal distribution with mean 0 and variance 1 and then normalizing the resulting vector to have length √D·σ (this is known as the Muller method).
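A sketch of this zero-thickness variant (numpy; the function name is mine):

```python
import numpy as np

def sphere_init(d_in, d_out, rng):
    """Sample weights uniformly from the sphere of radius sqrt(D)*sigma,
    where sigma^2 = 2/d_in (Kaiming) and D = d_in*d_out, via the Muller
    method: draw a standard Gaussian vector and rescale it."""
    D = d_in * d_out
    sigma = np.sqrt(2.0 / d_in)
    v = rng.standard_normal(D)
    w = np.sqrt(D) * sigma * v / np.linalg.norm(v)
    return w.reshape(d_in, d_out)
```

Every draw now has exactly the same norm, so the shell thickness is identically 0.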
Perturbing the weight initialization
The modification we made to Kaiming initialization was to sample directly from the boundary of a hypersphere, rather than from a hyperspherical shell. This is a more natural choice when conducting a perturbation analysis, because it makes it easier to ensure that the perturbed weights are sampled from the same distribution as the baseline weights.
Geometrically, the intersection of the hypersphere SD−1 of radius w0=∣w0∣ with a hypersphere of radius ϵ centered at a point on the boundary of the first, is a lower-dimensional hypersphere SD−2 of a modified radius ϵ′. If we sample uniformly from this lower-dimensional hypersphere, then the resulting perturbed points will (marginalizing over the choice of baseline point) follow the same uniform distribution over the original hypersphere.
This suggests a procedure to sample from the intersection of the weight initialization hypersphere and the perturbation hypersphere.
First, we sample from a hypersphere SD−2 of radius ϵ′ (using the same technique we used to sample the baseline weights). From a bit of trigonometry (see figure below), we know that the radius of this hypersphere will be ϵ′=w0 sin θ, where θ=cos⁻¹(1−ϵ²/(2w0²)).
Next, we rotate the vector so it is orthogonal to the baseline vector w0. This is done with a Householder reflection, H, that maps the current normal vector n^=(0,…,0,1) onto w0:
H=I−2ccT/(cTc),
where
c=n^+w^0,
and w^0=w0/∣w0∣ is the unit vector in the direction of the baseline weights.
Implementation note: for efficiency, we avoid constructing H explicitly and instead apply the reflection directly:
Hy=y−2(cTy/cTc)c.
Finally, we translate the rotated intersection sphere along the baseline vector, so that its boundary goes through the intersection of the two hyperspheres. From the figure above, we find that the translation has the magnitude w0′=w0cosθ.
Because both the intersection sphere and the baseline vector are sampled uniformly, the resulting perturbed vector follows the same distribution as the baseline vector, once restricted to the intersection sphere.
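Putting the three steps together, a sketch (numpy; the function name and the e_D-based Householder convention follow the construction above):

```python
import numpy as np

def sample_on_intersection(w0, eps, rng):
    """Sample w uniformly from {w : |w| = |w0| and |w - w0| = eps}, i.e. the
    intersection of the initialization sphere with the eps-perturbation
    sphere centered at w0. Assumes 0 < eps < 2|w0|."""
    D = w0.shape[0]
    r = np.linalg.norm(w0)
    theta = np.arccos(1.0 - eps**2 / (2.0 * r**2))
    eps_prime = r * np.sin(theta)          # radius of the intersection sphere
    # Step 1: sample uniformly from a sphere of radius eps' lying in the
    # hyperplane {y : y_D = 0} (Muller method).
    u = rng.standard_normal(D - 1)
    y = np.zeros(D)
    y[:-1] = eps_prime * u / np.linalg.norm(u)
    # Step 2: Householder reflection built from c = n_hat + w0_hat; it carries
    # the hyperplane orthogonal to n_hat = e_D onto the one orthogonal to w0.
    w0_hat = w0 / r
    c = w0_hat.copy()
    c[-1] += 1.0
    Hy = y - 2.0 * (c @ y) / (c @ c) * c
    # Step 3: translate along the baseline direction by r*cos(theta).
    return r * np.cos(theta) * w0_hat + Hy
```

By construction the output lies exactly on the initialization sphere (∣w∣=∣w0∣) and exactly ϵ away from the baseline (∣w−w0∣=ϵ), so sweeps over ϵ don't confound perturbation size with shell thickness.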
As it turns out, the weights live out the remainder of their lives on hyperspheres as well. Take the norm of the vectorized weights, w, and plot it as a function of the number of training steps. Across many different choices of weight initialization, this norm will grow (or shrink, depending on weight decay) remarkably consistently.
In the following figure, I've plotted this behavior for 10 independent initializations across 70 epochs of training a simple MNIST classifier. The same results hold more generally for a wide range of hyperparameters and architectures.
The norm during training, divided by the norm upon weight initialization. The parameter ϵ controls how far apart (as a fraction of the initial weight norm) the different weight initializations are.
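The measurement itself is easy to reproduce in miniature. Here's a toy numpy sketch (a tiny tanh regression network trained with plain SGD, not the original MNIST setup) that records the weight-norm ratio at every step:

```python
import numpy as np

def track_norm_ratio(steps, lr, weight_decay, rng):
    """Train a tiny 2-layer tanh network on random regression data with SGD
    and record |W1_t| / |W1_0| for the first-layer weights at every step."""
    X = rng.standard_normal((256, 20))
    y_true = rng.standard_normal(256)
    W1 = np.sqrt(2.0 / 20) * rng.standard_normal((20, 50))   # Kaiming-style
    w2 = np.sqrt(2.0 / 50) * rng.standard_normal(50)
    n0 = np.linalg.norm(W1)
    ratios = []
    for _ in range(steps):
        h = np.tanh(X @ W1)
        err = h @ w2 - y_true                     # dL/dpred for 0.5*MSE
        gW1 = X.T @ ((err[:, None] * w2) * (1 - h**2)) / len(X)
        gw2 = h.T @ err / len(X)
        W1 -= lr * (gW1 + weight_decay * W1)
        w2 -= lr * (gw2 + weight_decay * w2)
        ratios.append(np.linalg.norm(W1) / n0)
    return ratios
```

Sweeping weight_decay in this sketch reproduces the qualitative grow-vs-shrink behavior of the norm ratio.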
All this was making me think that I disregarded Brownian motion as a model of SGD dynamics a little too quickly. Yes, on lattices/planes, increasing the number of dimensions makes no difference to the overall behavior (beyond decreasing the variance in how displacement grows with time). But maybe that was just the wrong model. Maybe Brownian motion on hyperspheres is considerably different and is actually the thing I should have been looking at.
Before we tackle this in full, let's try to understand Brownian motion on hyperspheres in a few simplifying limits.
Unlike the Euclidean case, where having more than one dimension means you'll never return to the origin, living on a compact sphere means we'll revisit every neighborhood infinitely many times. The stationary distribution is the uniform distribution over the sphere, which we'll denote U(n) for Sn.
So our expectations end up being relatively simple integrals over the sphere.
The distance from the pole at the bottom to any other point on the surface depends only on the angle between the line through the origin and the pole and the line through the origin and that point,
z(θ)=√((r sin θ)²+(r−r cos θ)²)=2r sin(θ/2).
This symmetry means we only have to consider a single integral from the bottom to the top over Sn−1 "slices." Each slice has as much probability associated to it as the corresponding Sn−1 has surface area, times the measure associated to θ (=r dθ). This surface area varies with the angle from the center, as does the distance of that point from the origin. Put it together and you get some nasty integrals.
From this point on, I believe I made a mistake somewhere (the analysis for n→∞ is correct).
If we let Sn(x) denote the surface area of Sn with radius x, we can fill in the formula from Wikipedia:
Sn−1(x)=(2π^(n/2)/Γ(n/2))·x^(n−1).
Filling in x(θ)=rsinθ, we get an expression for the surface area of each slice as a function of θ,
Sn−1(x(θ))=Sn−1(r)·sin^(n−1)(θ),
so the measure associated to each slice is
dμ(θ)=Sn−1(r)·sin^(n−1)(θ)·r dθ.
n→∞
We can simplify further: as n→∞, sin^(n−1)(θ) tends towards a "comb function" that is zero everywhere except at θ=(m+1/2)π for integer m, where it equals 1 if m is even and −1 if m is odd.
Within the above integration limits (0,π), this means all of the probability becomes concentrated at the middle, where θ=π/2.
So in the infinite-dimensional limit, the expected distance from the starting pole is the distance between the pole and a point on the equator, z(π/2)=√2·r.
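This limit is easy to verify numerically: average the chord length 2r·sin(θ/2) against the slice weight sin^(n−1)(θ) on (0,π). A quadrature sketch I added (it uses a plain uniform-grid sum, which is accurate here because the integrand vanishes at the endpoints):

```python
import numpy as np

def expected_distance(n, r, num=20001):
    """E[z] over S^n: average the chord length z = 2r*sin(theta/2) against
    the slice weight sin^(n-1)(theta) on (0, pi) via a uniform-grid sum
    (the grid spacing cancels in the ratio)."""
    theta = np.linspace(0.0, np.pi, num)
    weight = np.sin(theta) ** (n - 1)
    z = 2.0 * r * np.sin(theta / 2.0)
    return np.sum(z * weight) / np.sum(weight)
```

For S² (n=2) this evaluates to (4/3)·r analytically, and as n grows it approaches √2·r, matching the concentration argument.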
General case
An important lesson I've learned here is: "try more keyword variations in Google Scholar and Elicit before going off and solving these nasty integrals yourself because you will probably make a mistake somewhere and someone else has probably already done the thing you're trying to do."
Anyway, here's the thing where someone else has already done the thing I was trying to do. [1] gives us the following form for the expected cosine of the angular displacement with time:
⟨cosθ(t)⟩=exp(−D(n−1)t/r²).
A little trig tells us that the expected displacement in the embedding space is
⟨z(t)⟩=r√(2(1−⟨cosθ⟩)),
which gives us a set of nice curves depending on the diffusion constant D:
The major difference with Brownian motion on the plane is that the expected displacement asymptotes at the limit we argued for above (√2·r).
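Concretely (a small sketch; the variable names are mine), the displacement curve and its two limits:

```python
import numpy as np

def expected_displacement(t, diff_const, n, r):
    """<z(t)> = r*sqrt(2*(1 - <cos theta(t)>)), with
    <cos theta(t)> = exp(-diff_const*(n-1)*t / r**2).
    Grows like sqrt(2*diff_const*(n-1)*t) for small t and
    asymptotes to sqrt(2)*r as t -> infinity."""
    cos_theta = np.exp(-diff_const * (n - 1) * t / r**2)
    return r * np.sqrt(2.0 * (1.0 - cos_theta))
```

The square-root early regime follows from 1−exp(−at)≈at for small t, and the asymptote is exactly the √2·r we derived from the n→∞ concentration argument.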
I've been looking for a model that predicts square-root-like growth at the very start, followed by a long period of nearly linear growth, followed by asymptotic behavior. Eyeballing this, it looked like my search for the long plateau of linear growth had come to an end.
Of course, this was entirely wishful thinking. Human eyes aren't great discriminators of lines. That "straight line" is actually pretty much parabolic. As I should have known (from the locally Euclidean nature of manifolds), the initial behavior is incredibly well modeled by a square root.
As you get further away from the nearly flat regime, the concavity only increases. This is not the curve I was looking for.