Shawn Hargreaves Blog
I find it interesting how some kinds of math work pretty much the same regardless of the number of dimensions. For instance, vector and matrix code hardly changes when you move between two, three, or higher dimensions. You just alter the number of components in each vector, then apply the same computations as before.
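As a quick sketch of that idea (illustrative Python, not from the original post): the exact same dot product and length functions work unchanged on vectors of any dimension.

```python
def dot(a, b):
    """Dot product: identical code for 2D, 3D, or any dimension."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

def length(v):
    """Vector length, built on the same dimension-agnostic dot product."""
    return dot(v, v) ** 0.5

# The same functions handle 2D and 3D vectors with no changes:
print(dot((3, 4), (1, 0)))        # 2D: 3
print(dot((1, 2, 3), (4, 5, 6)))  # 3D: 32
print(length((3, 4)))             # 5.0
```

Nothing in either function knows or cares how many components the vectors have.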
Newtonian physics tends to be implemented using vectors and matrices, which explains why it is so easy to apply the same "realistic" style physics to both 2D and 3D games.
Thought for the day: physicists suspect that what we perceive as 3D reality might in fact have many more dimensions. So what if we are all actually NPCs in a retro arcade game being played by an eleven dimensional teenager in his equivalent of MAME? At this very moment his friend could be saying something like "man, this takes me back. Remember when arcades were still popular? I miss the simplicity of gameplay back when everything was just 3D or 4D. And check out the size of those pixels! Crazy how we used to think that looked good, before everything went HD, with those giant blocky Planck units and no antialiasing!" But I digress.
Vector math has the useful property that we can take a complex 3D problem, visualize and solve it in 2D, then apply our 2D solution back to the original 3D task. We do this all the time, often without consciously noticing the simplification.
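One way to make that reduction concrete (a hypothetical sketch; the function names are mine): express a 3D point in 2D coordinates within a plane spanned by two orthonormal vectors, solve there, then lift the answer back to 3D.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_plane_2d(point, origin, u, v):
    """Express a 3D point as 2D coordinates in the plane through
    origin spanned by orthonormal vectors u and v."""
    rel = tuple(p - o for p, o in zip(point, origin))
    return (dot(rel, u), dot(rel, v))

def from_plane_2d(coords, origin, u, v):
    """Lift 2D plane coordinates back into the original 3D space."""
    x, y = coords
    return tuple(o + x * ui + y * vi for o, ui, vi in zip(origin, u, v))

# A point in the z = 0 plane round-trips through 2D and back:
origin, u, v = (0, 0, 0), (1, 0, 0), (0, 1, 0)
p2 = to_plane_2d((3, 4, 0), origin, u, v)   # (3, 4)
p3 = from_plane_2d(p2, origin, u, v)        # (3, 4, 0)
```

Any 2D technique can then be applied to `p2`, and the result mapped back with `from_plane_2d`.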
But there are other kinds of math that do not generalize from 2D to higher dimensions!
In school, I spent much time solving problems that involved right angled triangles, the angles created by such triangles, the trigonometry that can be derived from these angles, etc. My math education was dominated by this Euclidean approach, where my first reflex was always to sketch the problem on paper, then do math that invariably involved angles.
Trouble is, this way of thinking is not so useful in 3D. There is no obvious 3D equivalent of a right angled triangle. Solutions that depend on angles and trigonometry work well in 2D, but are tricky and sometimes impossible to apply in 3D.
Here be dragons for programmers who are learning 3D. Naturally enough, most people want to keep using the math they learned in school, and which has served them well in previous 2D games. Don't do it! When there are more than two dimensions, angles are not your friend, and trigonometry should be avoided. Vectors, matrices, and their buddy the dot product will serve you better, thanks to their ability to work the same way with any number of dimensions.
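To illustrate (a hypothetical example, not from the original post): testing whether a target lies within a view cone needs no per-query trigonometry at all. Compare a dot product against a cosine threshold computed once up front, and the identical code works in 2D or 3D.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def in_view_cone(forward, to_target, half_angle_cos):
    """True if to_target lies within the cone around forward.

    half_angle_cos is cos(half angle), precomputed once: no trig
    per query, and the same code works in any dimension.
    """
    return dot(normalize(forward), normalize(to_target)) >= half_angle_cos

# 90 degree cone (cos 45 degrees), identical usage in 2D and 3D:
threshold = math.cos(math.radians(45))
print(in_view_cone((1, 0), (1, 0.5), threshold))        # 2D: True
print(in_view_cone((0, 0, 1), (0.1, 0, 1), threshold))  # 3D: True
```

The one `cos` call happens once at setup; the inner loop is pure vector math.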
My math is very rusty as well, but trigonometry is still applicable when dealing with plane equations and dot products. For example: the length of the projection of one vector onto another, and the distance of a point from a plane.
Then again, this could also be considered a simplification of 3D space onto 2D concepts...
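Both computations that comment mentions do reduce to dot products; here is a rough sketch (illustrative Python, names are mine):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def projection_length(a, b):
    """Length of the projection of vector a onto vector b."""
    return dot(a, b) / math.sqrt(dot(b, b))

def point_plane_distance(point, normal, d):
    """Signed distance from point to the plane dot(normal, x) + d = 0,
    where normal is unit length."""
    return dot(normal, point) + d

print(projection_length((3, 4), (1, 0)))               # 3.0
print(point_plane_distance((0, 5, 0), (0, 1, 0), -2))  # 3
```

Note that neither function ever computes an angle: the cosine is implicit in the dot product itself.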
It's worth mentioning that linear algebra in general isn't limited by dimensionality. We use 4D matrices to represent affine transformations in 3D space (world/view/projection matrices being the most common) but that's a very graphics- and physics-centric limit. Optimisation problems such as inverse kinematics or more general 'best fit' problems might extend into the tens or hundreds of dimensions. Machine learning and AI algorithms might use linear algebra to do statistics over very high-dimensional spaces - in my day-job I often come across matrices of order 10,000 or higher. Working with such large dimensions poses conceptual and technical challenges that can't be easily visualised and solved as simple extensions of 2D and 3D systems. For example, finding the inverse of an order 10000 matrix shouldn't be implemented in code using a simple extension of the same method we probably learned on paper in school.
Thankfully there's a lot of good material out there for people who want to learn this stuff properly. Try googling a chap called Gilbert Strang. He's a maths/comp-sci professor at MIT who taught a couple of first-year courses on linear algebra and computational methods. The lectures are all available as videos online for free. The courses are really easy to follow and well paced for anyone with a decent high-school grounding in maths.
Funny that you mention abandoning trigonometry. In my signal processing class, this is roughly what my professor said:
"So you do this by using cos here and there and there; however, we are now collectively going to forget that cos exists, because it's much easier to use complex numbers."
It's not just 3D math that's better off without trigonometry. Most math that involves trig, and can be replaced with another approach (matrices, vectors, complex numbers), is better off replaced once the problems get more complex.
I think all problems can be solved with just *, -, +, ^, constants, and variables. And doing so will show you easier ways and shortcuts you would never have seen otherwise.
I wrote a blog on spaces http://xahlee.org/math/symmetric_space.html ; at the bottom there's some explanation of the category of spaces.
Anyway, just some random comments, to the extent I understand it.
Hmm, the simplest way for me to solve problems in 3D is to project the problem to 2D, solve it, and unproject back to 3D.
I don't see such a disconnect between trig and vectors/matrices as you do. They are both valuable tools to master, and you can't truly understand one without the other.
Remember, the dot product is defined as ||a|| ||b|| cos(theta).
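That identity is exactly the bridge between the two toolkits: when an angle genuinely is needed, it can be recovered from the dot product. A small sketch (illustrative Python, not from the comment):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    """Angle in radians, via dot(a, b) = ||a|| ||b|| cos(theta)."""
    cos_theta = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, cos_theta)))

print(math.degrees(angle_between((1, 0), (0, 1))))        # ~90 in 2D
print(math.degrees(angle_between((1, 0, 0), (1, 1, 0))))  # ~45 in 3D
```

The same function works in any dimension, precisely because the angle falls out of the dot product rather than the other way around.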