this post was submitted on 31 Dec 2023
182 points (97.4% liked)
[Outdated, please look at pinned post] Casual Conversation
Any time you need to analyze or synthesize a function or signal, rather than just a finite set of values, the problem will in general be infinite-dimensional unless you choose to approximate it. Practically, most physics problems begin as a partial differential equation, i.e. the solution is a signal depending on both time and space. Ideally, you can use the problem's symmetry and extra information to reduce its dimensionality, but sometimes you can't; in other cases you can exploit the inherent structure of infinite-dimensional spaces to get exact results or better approximations.
Even if you can get the problem down to one dependent variable, a function technically needs infinitely many parameters to be fully specified. You're in luck if your function has a simple rule like f(t) = sin(t), but you might not have access to the full rule that generated the function, or it might be too complicated to work with.
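To make "infinitely many parameters" concrete, here's a small sketch (Python; the function name is mine) using a truncated Taylor series: any finite truncation of the infinitely many series coefficients only approximates sin(t), and the same truncation that works near the expansion point fails farther away.

```python
import math

def sin_taylor(t, n_terms):
    """Truncated Taylor series: sin(t) ~ sum of (-1)^k t^(2k+1)/(2k+1)! for k < n_terms."""
    return sum((-1) ** k * t ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# Near t = 0, three parameters are plenty; at t = 6, the same three fail badly.
print(abs(sin_taylor(0.5, 3) - math.sin(0.5)))  # small error
print(abs(sin_taylor(6.0, 3) - math.sin(6.0)))  # large error
```

Specifying sin exactly this way would take all infinitely many coefficients, which is the sense in which a generic function is an infinite-dimensional object.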
Let's say that you have a 3-dimensional vector in space; for example, v = (1,0,-1) (relative to some coordinate system; take a Euclidean basis for concreteness). Another way to represent that information is with the function f(n) = {1 for n=1, 0 for n=2, -1 for n=3}. You can extend this representation to (countably) infinite vectors, i.e. sequences of numbers, by allowing n in f(n) to be any integer. For example, f(n) = n can be thought of as the vector (...,-2,-1,0,1,2,...). The representation also works when you allow n to be any real number. For example, f(n) = cos(n) and g(n) = e^n can each be thought of as a gigantic vector, because af(n)+bg(n) is still a "gigantic vector", and such functions satisfy the other vector-space properties needed to treat them like gigantic vectors.
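Here's a sketch of that correspondence in Python (variable names are mine): a finite vector is literally a function on its index set, and linear combinations of functions behave like linear combinations of vectors.

```python
import numpy as np

# The finite vector v = (1, 0, -1), viewed as a function on the index set {1, 2, 3}.
v = {1: 1.0, 2: 0.0, 3: -1.0}
f = lambda n: v[n]

# "Gigantic vectors": functions of a real variable. A linear combination
# a*f_cos + b*g_exp of two functions is again a function, just as a linear
# combination of ordinary vectors is a vector.
a, b = 2.0, -3.0
f_cos = np.cos
g_exp = np.exp
h = lambda t: a * f_cos(t) + b * g_exp(t)

print(f(1), f(3))  # the vector's components, read off as function values
print(h(1.5))      # one "component" of the combined gigantic vector
```

The other vector-space axioms (associativity, a zero function, negatives, etc.) hold componentwise for exactly the same reason they hold for finite vectors.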
This allows us to bring geometric concepts from ordinary space and apply them to functions. For example, we can typically define a metric to measure the distance between two functions, and a norm to talk about the size or energy of a signal. With a little extra machinery (an inner product), we can find the cosine between (real) functions and get the "angle" between them in function space. We can project a function onto another function, or onto a subspace of functions, using linear algebra extended to function spaces. This is how we would actually take that infinite-dimensional problem and approximate it: by projecting it onto a suitable finite basis and solving in the approximation space.
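A numerical sketch of those geometric ideas (Python/NumPy; the interval, basis, and target function are my choices): define the inner product as an integral, get a norm and an "angle" from it, and then project a function onto a small finite basis, which here amounts to a truncated Fourier approximation.

```python
import numpy as np

# Inner product <f, g> = integral of f(t) g(t) over [0, 2*pi],
# approximated by a Riemann sum on a fine uniform grid.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
dt = t[1] - t[0]

def inner(f_vals, g_vals):
    return np.sum(f_vals * g_vals) * dt

def norm(f_vals):
    return np.sqrt(inner(f_vals, f_vals))

# "Angle" between two functions, exactly as for ordinary vectors.
f_vals = np.sin(t)
g_vals = np.sin(t + 0.5)
cos_angle = inner(f_vals, g_vals) / (norm(f_vals) * norm(g_vals))
print(cos_angle)

# Project a target onto the orthogonal basis {1, cos t, sin t, cos 2t, sin 2t}:
# a 5-parameter approximation of an infinite-dimensional object.
basis = [np.ones_like(t), np.cos(t), np.sin(t), np.cos(2 * t), np.sin(2 * t)]
target = np.abs(np.sin(t))  # has no finite expansion in this basis
approx = sum(inner(target, b) / inner(b, b) * b for b in basis)
print(np.max(np.abs(target - approx)))  # worst-case approximation error
```

Adding more basis functions shrinks the projection error, which is the practical payoff of treating functions as vectors: the infinite-dimensional problem becomes a finite linear-algebra problem.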