The paper that was most influential for me when I built my first toy particle systems is this paper by Andrew Witkin. Now I need to decide whether I am going to follow this pattern, or use another.
The defining features of the Witkin model, as I see it, are:
- Particles are objects (structures), aggregated into a particle system object
- The system state, and its derivatives, are scattered to a ‘state vector’ for some operations
- The solver methods know about the Particle data structure
Now, I’ve dreamed about writing a particle system that uses a completely generic ODE solver: one that only knows about the state vector (just a list of variables) and its derivative, and nothing about a “particle” object. The problem is that any multi-step integration method needs to call the subroutine that calculates the time derivative of each variable, and if the solver is truly generic, that subroutine can only speak in terms of the raw vector – which means writing it in a kind of cypher. Imagine s is our state vector: s[0] is position in the x direction, s[1] is position in the y direction, and so on. It’s much easier to write a position-dependent force in terms like particle[i].position than to remember which state vector index corresponds to which dynamical variable.
The point I always come to when considering this (it’s come up a few times now) is this: is it really worth worrying about having a generic ODE solver, given that the solver code is not *that* long, and that the chances of ever wanting to use it for a system with oodles of degrees of freedom that is not a particle system are incredibly slim?
So, it’s OK for the solver to know about particles. Why, then, do we need to scatter our particle data to a vector at all? Because it’s faster to do things like increment the whole vector than to deal with multiple levels of indirection.
But is this still the case if our variables are linked not to a particle structure, but to a particle *system* structure? That is, if the system stores its state in flat arrays directly, so that the “state vector” is just that storage rather than a copy.
I still don’t know the answer, still researching. I believe I have C code somewhere that implements both, so a few tests might be in order.