OK, I’m no python wizard. And trying to learn Numpy at the same time as python itself adds a whole extra dimension of confusion.

As a physicist, I’m trained to be lazy. So I like to write my code to be independent of the number of dimensions it might run in. Case in point: SPH kernels. Why write three different cases

if dims==1:
    kernel = ....
elif dims==2:
    kernel = ...

When you can write

for i in range(dims):
    kernel = kernel + next_dimensions_contribution
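To make that concrete, here’s a minimal sketch of the pattern using a Gaussian kernel as a stand-in (my own toy example, not the actual SPH kernel from the post): the normalization is built up one dimension at a time, so the same function covers 1D, 2D, and 3D without separate branches.

```python
import numpy as np

def gaussian_kernel(r, h, dims):
    """Toy Gaussian smoothing kernel, dimension-independent.

    The per-dimension normalization factor 1/(h*sqrt(pi)) is
    accumulated in a loop, so no if dims==1 / elif dims==2
    branching is needed.
    """
    norm = 1.0
    for i in range(dims):
        norm = norm / (h * np.sqrt(np.pi))
    return norm * np.exp(-(r / h) ** 2)

# Same code, any number of dimensions:
w1 = gaussian_kernel(0.5, 1.0, dims=1)
w3 = gaussian_kernel(0.5, 1.0, dims=3)
```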

Numpy was seriously getting to me when I discovered that zero-dimensional arrays cannot be indexed. This means d[0] is fine for a two-element array such as (17.8, 44.3), but no good for a bare scalar like (88.2), even if both are numpy arrays. The solution turns out to be numpy.atleast_1d(), which forces the zero-dimensional array to have a dimension.
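Here’s a quick demo of the gotcha and the fix (my own minimal example):

```python
import numpy as np

a = np.array([17.8, 44.3])  # one-dimensional, two elements
b = np.array(88.2)          # zero-dimensional: a wrapped scalar

a[0]      # fine: 17.8
# b[0]    # raises IndexError: array is 0-dimensional

c = np.atleast_1d(b)  # now shape (1,), so indexing works
c[0]      # 88.2
```

Handy detail: atleast_1d leaves arrays that already have a dimension untouched, so you can apply it unconditionally at the top of a function.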

It’s little problems like this that make numpy more user-unfriendly than it should be. It’s very powerful, but in my opinion there are too many points of difference from fortran/matlab/octave, and too many gotchas like this that end up costing time, for it to be a language for scientists who aren’t already programmatically inclined.

I think it will get there –