
Functions Are Vectors (2023)

49 comments

July 6, 2025

dang

This previous thread was also good: Functions are vectors - https://news.ycombinator.com/item?id=36921446 - July 2023 (120 comments)

jschveibinz

An engineering, signal processing extension/perspective:

An infinite sequence approximates a general function, as described in the article (see the slider bar example). In signal processing applications, functions can be considered (or forced) to be bandlimited so a much lower-order representation (i.e. vector) suffices:

- The subspace of bandlimited functions is much smaller than the full L^2 space

- It has a countable orthonormal basis (e.g., shifted sinc functions)

- The function can be written as (with sinc functions):

x(t) = \sum_{n=-\infty}^{\infty} x(nT) \cdot \text{sinc}\left( \frac{t - nT}{T} \right)

- This is analogous to expressing a vector in a finite-dimensional subspace using a basis (e.g. sinc)

Discrete-time signal processing is useful for comp-sci applications like audio, SDR, trading data, etc.
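A minimal numerical sketch of that reconstruction formula (my own, not from the comment), using numpy's normalized `np.sinc`. Truncating the infinite sum to finitely many samples recovers a bandlimited signal accurately away from the edges of the sampling window:

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Truncated Whittaker-Shannon reconstruction from uniform samples x(nT)."""
    n = np.arange(len(samples))
    t = np.atleast_1d(t)
    # np.sinc is the normalized sinc: sinc(x) = sin(pi x) / (pi x)
    return np.sinc((t[:, None] - n[None, :] * T) / T) @ samples

T = 0.1                               # sample period -> Nyquist frequency 5 Hz
n = np.arange(64)
x = np.sin(2 * np.pi * 2.0 * n * T)   # 2 Hz sine, well below Nyquist
t = np.linspace(2.0, 4.0, 200)        # stay away from the truncation edges
x_hat = sinc_reconstruct(x, T, t)
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * 2.0 * t)))
print(err)  # small residual, due only to truncating the infinite sum
```

With the full doubly-infinite sum the reconstruction would be exact; the leftover error here comes entirely from using 64 samples.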

MalbertKerman

The jump from spherical harmonics to eigenfunctions on a general mesh, and the specific example mesh chosen, might be the finest mathematical joke I've seen this decade.

sixo

Would you explain the joke for the rest of us?

MalbertKerman

It's quietly reversing the traditional "We approximate the cow to be a sphere" and showing how the spherical math can, in fact, be generalized to solutions on the cow.

sixo

oh. I did not interpret that blob as a cow. Thanks.

xeonmc

Spherical Harmonics approximating Spherical Cows?

dark__paladin

assume spherical cow

sixo

A few questions occur to me while reading this, which I am far from qualified to answer:

- How much of this structure survives if you work on "fuzzy" real numbers? Can you make it work? Here I don't necessarily mean "fuzzy" in the specific technical sense, but in any sense in which a number is defined only up to a margin of error/length scale, which in my mind is similar to "finitism", or "automatic differentiation" in ML, or a "UV cutoff" in physics. I imagine the exact definition will determine how much vectorial structure survives. The obvious answer is that it works like a regular Fourier transform but with a low-pass filter applied, but I imagine this might not be the only answer.

- Then if this is possible, can you carry it across the analogy in the other direction? What would be the equivalent of "fuzzy vectors"?

- If it isn't possible, what similar construction on the fuzzy numbers would get you to the obvious endpoint of a "Fourier analysis with a low-pass filter pre-applied"?

- The argument arrives at Fourier analysis by considering an orthonormal diagonalization of the Laplacian. In linear algebra, the SVD applies more generally than diagonalization: is there an "SVD" for functions?
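The Laplacian-diagonalization route can be checked numerically; here is a small sketch (mine, not from the thread) showing that the symmetric discrete Laplacian on a periodic grid has exactly the Fourier eigenvalues, so its orthonormal eigenvectors are sampled sinusoids. For the SVD question, `np.linalg.svd` of any discretized linear operator plays the role of the singular value decomposition of a compact operator:

```python
import numpy as np

# Discrete Laplacian (second difference) on a periodic grid of N points.
N = 64
L = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
L[0, -1] = L[-1, 0] = 1          # periodic boundary conditions

# L is symmetric, so it has a real spectrum and an orthonormal eigenbasis.
vals, vecs = np.linalg.eigh(L)

# The eigenvalues match the Fourier symbol -4 sin^2(pi k / N), i.e. the
# eigenvectors span the same spaces as sampled sines and cosines.
k = np.arange(N)
expected = np.sort(-4 * np.sin(np.pi * k / N) ** 2)
print(np.allclose(vals, expected))  # True (eigh returns eigenvalues in ascending order)
```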

sitkack

A fuzzy vector is a Gaussian? Thinking of what it would be in 1, 2, 3 and n dimensions.

xeonmc

I’d guess that it would be factored as “nonlinearity”, which might be characterized as some form of harmonic distortion, analogous to clipping nonlinearity of finite-ranged systems?

Perhaps some conjugate relation could be established between finite-range in one domain and finite-resolution in another, in terms of the effect such nonlinearities have on the spectral response.

nyrikki

One place where I think the previous discussion lost something important, at least for me, is with functions.

The popular lens is the "porcupine" picture, but infinite dimensions for functions are often more effectively thought of the way it's presented around 8:00 in this video.

https://youtu.be/q8gng_2gn70

While that video obviously is not fancy, it will help with building intuition about fixed points.

It explains how the dimensions are the points needed to describe a function in a plane, and is not as much about orthogonal dimensions.

Specifically with fixed points and non-expansive mappings.

Hopefully this helps someone build intuitions.

chongli

I see this a lot with math concepts as they begin to get more abstract: strange visualizations to try to build intuition. I think this is ultimately a dead-end approach which misleads rather than enlightens.

To me, the proper way of continuing to develop intuition is to abandon visualization entirely and start thinking about the math in a linguistic mode. Thus, continuous functions (perhaps on the closed interval [0,1] for example) are vectors precisely because this space of functions meet the criteria for a vector space:

* (+) vector addition where adding two continuous functions on a domain yields another continuous function on that domain

* (.) scalar multiplication where multiplying a continuous function by a real number yields another continuous function with the same domain

* (0) the existence of the zero vector which is simply the function that maps its entire domain of [0,1] to 0 (and we can easily verify that this function is continuous)

We can further verify the other properties of this vector space which are:

* associativity of vector addition

* commutativity of vector addition

* identity element for vector addition (just the zero vector)

* additive inverse elements (just multiply f by -1 to get -f)

* compatibility of scalar multiplication with field multiplication (i.e a(bf) = (ab)f, where a and b are real numbers and f is a function)

* identity element for scalar multiplication (just the number 1)

* distributivity of scalar multiplication over vector addition (so a(f + g) = af + ag)

* distributivity of scalar multiplication over scalar addition (so (a + b)f = af + bf)

So in other words, instead of trying to visualize an infinite-dimensional space, we’re just doing high school algebra with which we should already be familiar. We’re just manipulating symbols on paper and seeing how far the rules take us. This approach can take us much further when we continue on to the ideas of normed vector spaces (abstracting the idea of length), sequences of vectors (a sequence of functions), and Banach spaces (giving us convergence and the existence of limits of sequences of functions).
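Those axioms can also be spot-checked numerically by sampling the functions at points of [0, 1]; this is a sketch of a few of the properties, not a proof:

```python
import numpy as np

# Treat continuous functions on [0, 1] as vectors and spot-check the
# operations above at sample points (a numerical sketch, not a proof).
f = lambda x: np.sin(2 * np.pi * x)
g = lambda x: x ** 2

add = lambda f, g: (lambda x: f(x) + g(x))     # (+) vector addition
scale = lambda a, f: (lambda x: a * f(x))      # (.) scalar multiplication
zero = lambda x: np.zeros_like(x)              # (0) the zero vector

x = np.linspace(0.0, 1.0, 101)
assert np.allclose(add(f, g)(x), add(g, f)(x))                 # commutativity
assert np.allclose(scale(3.0, add(f, g))(x),
                   add(scale(3.0, f), scale(3.0, g))(x))       # a(f + g) = af + ag
assert np.allclose(add(f, scale(-1.0, f))(x), zero(x))         # f + (-f) = 0
print("axioms hold at the sampled points")
```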

ajkjk

Funny, I agree that visualizations aren't that useful after a point, but when you said "start thinking about the math in a linguistic mode" I thought you were going to describe what I do, but then you described an entirely different thing! I can't learn math the way you described at all: when things are described by definitions, my eyes glaze over, and nothing is retained. I think the way you are describing filters out a large percentage of people who would enjoy knowing the concepts, leaving only the people whose minds work in that certain way, a fairly small subset of the interested population.

My third way is that I learn math by learning to "talk" in the concepts, which is I think much more common in physics than pure mathematics (and I gravitated to physics because I loved math but can't stand learning it the way math classes wanted me to). For example, thinking of functions as vectors went kinda like this:

* first I learned about vectors in physics and multivariable calculus, where they were arrows in space

* at some point in a differential equations class (while calculating inner products of orthogonal hermite polynomials, iirc) I realized that integrals were like giant dot products of infinite-dimensional vectors, and I was annoyed that nobody had just told me that because I would have gotten it instantly.

* then I had to repair my understanding of the word "vector" (and grumble about the people who had overloaded it). I began to think of vectors as the N=3 case and functions as the N=infinity case of the same concept. Around this time I also learned quantum mechanics where thinking about a list of binary values as a vector ( |000> + |001> + |010> + etc, for example) was common, which made this easier. It also helped that in mechanics we created larger vectors out of tuples of smaller ones: spatial vector always has N=3 dimensions, a pair of spatial vectors is a single 2N = 6-dimensional vector (albeit with different properties under transformations), and that is much easier to think about than a single vector in R^6. It was also easy to compare it to programming, where there was little difference between an array with 3 elements, an array with 100 elements, and a function that computed a value on every positive integer on request.

* once this is the case, the Fourier transform, Laplace transform, etc are trivial consequences of the model. Give me a basis of orthogonal functions and of course I'll write a function in that basis, no problem, no proofs necessary. I'm vaguely aware there are analytic limitations on when it works but they seem like failures of the formalism, not failures of the technique (as evidenced by how most of them fall away when you switch to doing everything on distributions).

* eventually I learned some differential geometry and Lie theory and learned that addition is actually a pretty weird concept; in most geometries you can't "add" vectors that are far apart; only things that are locally linear can be added. So I had to repair my intuition again: a vector is a local linearization of something that might be macroscopically curved, and the linearity is what makes it possible to add and scalar-multiply it. And also that there is functionally no difference between composing vectors with addition or multiplication, they're just notations.

At no point in this were the axioms of vector spaces (or normed vector spaces, Banach spaces, etc) useful at all for understanding. I still find them completely unhelpful and would love to read books on higher mathematics that omit all of the axiomatizations in favor of intuition. Unfortunately the more advanced the mathematics, the more formalized the texts on it get, which makes me very sad. It seems very clear that there are two (or more) distinct ways of thinking that are at odds here; the mathematical tradition heavily favors one (especially since Bourbaki, in my impression) and physics is where everyone who can't stand it ends up.
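The "integrals are giant dot products" observation can be made concrete with a discretized inner product; here's a sketch (mine) using a Fourier basis of sines rather than the Hermite polynomials mentioned above:

```python
import numpy as np

# Discretize [0, 2 pi] finely: the L2 inner product <f, g> = integral of f*g
# becomes an ordinary dot product of sample vectors, scaled by dx.
x = np.linspace(0.0, 2.0 * np.pi, 10001)
dx = x[1] - x[0]

def inner(f, g):
    return np.dot(f(x), g(x)) * dx   # Riemann-sum approximation of the integral

# sin(nx) form an orthogonal family on [0, 2 pi]:
print(inner(np.sin, lambda t: np.sin(2.0 * t)))   # ~ 0  (orthogonal)
print(inner(np.sin, np.sin))                      # ~ pi (squared norm)
```

The finer the grid, the closer the dot product gets to the integral, which is exactly the N -> infinity limit being described.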

olddustytrail

> infinite dimensions for functions is often more effective when thought of as around 8:00

I guess it works if you look at it sideways.

layer8

Well, yeah, function spaces are an example of vector spaces: https://en.wikipedia.org/wiki/Vector_space#Function_spaces

ttoinou

Isn't this the opposite way around? Vectors are functions whose input space is a discrete set of dimensions. Let's not pretend going from the natural numbers to the reals is "simple"; the real numbers are a fascinating, non-obvious mathematical discovery. The passage from a few numbers to all natural numbers (aleph_0) is also non-obvious. So we have two aleph passages to transform N-D vectors into functions over the reals.

xeonmc

Vectors are not necessarily discrete-domained. Anything that satisfies the vector space properties is a vector.

ttoinou

I agree but I'm operating under the assumption of the article

  Conceptualizing functions as infinite-dimensional vectors lets us apply the tools of linear algebra to a vast landscape of new problems

layer8

Linear algebra isn’t limited to discrete-dimensional vector spaces. Or what do you mean?

gizmo686

Vectors are an abstract notion. If you have two sets and two operations that satisfy the definition of a vector space, then you have a vector space; and we refer to elements of the vector set as "vectors" within that vector space.

The observation here is that the set of real-valued functions, together with the set of real numbers and the natural notions of function addition and multiplication by a real number, satisfies the definition of a vector space. As a result, all the results of linear algebra can be applied to real-valued functions.

It is true that any vector space is isomorphic to a vector space whose vectors are functions. Linear algebra makes a lot of use of that result, but it is different from what the article is discussing.

a3w

Nice: varying the l and m values lets you get the orbitals from chemistry.

(This is where I learned at least half of the math on this page: theoretical chemistry.)
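A small sketch (mine, not from the thread) of the m = 0 spherical harmonics behind the s, p, d orbital shapes, built from Legendre polynomials and checked for orthonormality on the sphere:

```python
import numpy as np
from numpy.polynomial import legendre

# Zonal (m = 0) spherical harmonics via Legendre polynomials:
#   Y_l^0(theta) = sqrt((2l + 1) / (4 pi)) * P_l(cos theta)
def Y_l0(l, theta):
    coeffs = np.zeros(l + 1)
    coeffs[l] = 1.0                     # select P_l in the Legendre basis
    return np.sqrt((2 * l + 1) / (4 * np.pi)) * legendre.legval(np.cos(theta), coeffs)

# Orthonormality over the sphere (the phi integral contributes 2 pi for m = 0):
theta = np.linspace(0.0, np.pi, 20001)
dth = theta[1] - theta[0]
def inner(l, k):
    return 2 * np.pi * np.sum(Y_l0(l, theta) * Y_l0(k, theta) * np.sin(theta)) * dth

print(inner(1, 1), inner(1, 2))   # ~ 1.0 (normalized) and ~ 0.0 (orthogonal)
```

These are the same basis functions the article diagonalizes the Laplacian on the sphere with; l = 0 gives the spherical s orbital and l = 1 the p_z lobe.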


xeonmc

also known as Applied Quantum Mechanics.

77pt77

Any basic linear algebra course should talk about this, at least in the finite-dimensional case.

Polynomials come to mind.

malwrar

So cool! This is the first time I’ve ever read about a math idea and felt a deep pull to know more.

tempodox

Oh, my. Alice, meet rabbit hole.

EGreg

Only functions on a finite domain are vectors.

Functions on a countable domain are sequences.